When I was thinking about Advance, I decided that I wanted to automate a lot of my data collection. I made this decision for a variety of reasons, some theoretical (if I keep the data collection low-key, it won’t interfere with player behavior) and some practical (more time spent on R&D, less on entering and coding data). Most important, I wanted to distribute my game online and reach a broad population while still collecting sophisticated and subtle data.
Turns out this is actually hard. Who knew?
I’m not surprised that automated online data collection has its own set of challenges, but I’m a bit surprised by what some of those challenges have been. I keep running into fairly simple things I want to do that aren’t well supported by existing tools: counterbalancing the presentation of tasks, randomizing subject assignment to research conditions, conditional pre- and post-test support, and combining complex tasks with surveys. Some tools have some of these features, but I haven’t found any that have all of them.
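For concreteness, here’s a rough sketch of the kind of logic I mean, in Python with made-up condition and task names. It assigns each participant to a condition reproducibly and rotates task order so that, across participants, every task shows up in every serial position. A real tool would need to do this server-side, persist assignments, and cope with dropouts; this is just the toy version.

```python
import random

# Made-up condition and task names -- stand-ins for whatever a given study uses.
CONDITIONS = ["control", "feedback", "feedback_plus_hints"]
TASKS = ["puzzle_a", "puzzle_b", "puzzle_c", "puzzle_d"]


def assign_condition(participant_id: int) -> str:
    """Randomly assign a participant to a research condition.

    Seeding the RNG on the participant ID keeps the assignment stable
    if the participant reloads the study mid-session.
    """
    rng = random.Random(participant_id)
    return rng.choice(CONDITIONS)


def counterbalanced_order(participant_id: int, tasks: list[str]) -> list[str]:
    """Counterbalance task presentation with a simple cyclic rotation:
    participant k starts at task k mod n, so across every n participants
    each task appears in each serial position equally often.
    """
    n = len(tasks)
    start = participant_id % n
    return [tasks[(start + i) % n] for i in range(n)]


if __name__ == "__main__":
    for pid in range(6):
        print(pid, assign_condition(pid), counterbalanced_order(pid, TASKS))
```

Even this toy version hints at the annoying parts: keeping assignments stable across page reloads, and keeping positions balanced when the number of participants isn’t a clean multiple of the number of tasks.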
Enter Headlamp Research, which is designing tools for people to do research online. They haven’t released their toolset yet, but I just took their “What do we need to be doing?” survey and was really impressed. If nothing else, they’re asking all the right questions.
They’re not relevant to the work I’m doing on Advance, because I’m rolling my own tools into the game itself – but by the time I’m ready to begin a new study, it seems like they’ll have some really powerful tools available, plus a user population already in place. Self-selection effects could be problematic (after all, you’re only testing the kinds of people who sign up to do research online!), but I find their approach really inspiring.