
Posts from November 2007

So what is Web 2.0 anyway?

It's hard to go to an MR conference these days without at least one presentation about Web 2.0. Everyone seems to agree that MR needs to find a way to use the user-generated content that is the staple of Web 2.0, but the industry continues to struggle with how best to do that. For some examples of how MR is struggling with this, have a look at this post at Customer Listening. Before you do that, you might want to check out a few links to some videos that may help you understand what the fuss is all about.

First up is Tim O'Reilly, who is generally credited with coining the term Web 2.0. O'Reilly's explanation is about as succinct as you can get. For a more detailed explanation, check out this video from U Tech Tips. Finally, there is what pretty much everyone I know views as the coolest thing yet to capture what Web 2.0 is all about. If you haven't seen The Machine is Us/ing Us, you need to.


The Changing Panel Landscape

Over the last couple of weeks I have been in two different forums where there was heavy discussion about panel quality and the future of online research. The first of these was the ESOMAR Panels Conference which, despite being in the US, had mostly Europeans presenting. The second was the Respondent Cooperation Forum, the same event where last year Kim Dedeker from P&G delivered her bombshell: "Two surveys a week apart by the same online supplier yielded different recommendations . . . I never thought I was trading data quality for cost savings."

There certainly was no shortage of interesting presentations and discussion across these two events. Now, having had a chance to let it all sink in, I am convinced (more or less) of four things.

First, panel data quality is rapidly becoming yesterday's story. Panel companies realize they must convince their clients that they are doing the right things, especially where fraudulent respondents are concerned. They recognize that there is a new level of "regulation," either formal (like ISO) or informal (like the expected ARF standard), and that they need to cooperate with clients who bring them tangible evidence of "bad respondents." Researchers and clients alike are now routinely doing the kind of data cleaning that Theo Downes-LeGuin first championed back in 2005 and that he and I describe in two recent papers.
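To make the idea concrete, here is a minimal sketch of the sort of respondent-level checks this kind of cleaning typically involves: flagging speeders, straight-liners, and duplicate panelists. The column names, thresholds, and helper function are hypothetical illustrations for this post, not taken from the papers.

```python
# A hypothetical sketch of basic panel data cleaning checks.
import pandas as pd

def flag_suspect_respondents(df: pd.DataFrame,
                             grid_cols: list,
                             min_minutes: float = 5.0) -> pd.DataFrame:
    """Add simple quality flags: speeding, straight-lining, duplicate IDs."""
    out = df.copy()

    # Speeders: completed the survey implausibly fast.
    out["flag_speeder"] = out["duration_minutes"] < min_minutes

    # Straight-liners: gave the identical answer to every item in a grid.
    out["flag_straightliner"] = out[grid_cols].nunique(axis=1) == 1

    # Duplicates: the same panelist ID appearing more than once.
    out["flag_duplicate"] = out["panelist_id"].duplicated(keep=False)

    out["flag_any"] = out[["flag_speeder", "flag_straightliner",
                           "flag_duplicate"]].any(axis=1)
    return out

# Example (hypothetical column names):
# completes = flag_suspect_respondents(completes,
#     grid_cols=[f"q10_{i}" for i in range(1, 11)])
```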

Second, design is hot! Panel vendors are pushing the so-called inattentive problem (a.k.a. satisficing) back on researchers, arguing that questionnaires are too long, too badly written, too uninteresting to many respondents, and too hard to complete. As a consequence, many respondents just zone out and click their way through them. There is also a begrudging acceptance among users of panel data that even panel respondents are in short supply and that we can't just keep throwing them back. Improving online questionnaire design is getting new recognition as the way to better, more valid data.

Third, online communities are hot! Some people see these as the next step after panels, especially as a natural evolution for proprietary panels of a client's customers. These allow clients to quickly do surveys, get qualitative feedback, and watch their customers interact with one another about their products. They provide genuine collaboration with customers.

Fourth, no one is quite sure yet what to make of Web 2.0. Neither am I, at least in terms of its use for MR. I just hope I don't find myself advocating surveys of respondents in Second Life any time soon.

All in all, we continue to live in interesting times.