Regular readers of this blog (if there are any) know that I have some genuine enthusiasm for Research 2.0 or Connected Research or whatever we call MR in the age of Web 2.0 and social media. Yet I still find it largely undefined, mostly a collection of buzzwords still awaiting a real definition as a research method. Niels Schillewaert at Insites Consulting in Belgium is doing some of the best thinking of anyone about what this future looks like. I've recommended his stuff before and I follow that up here with the suggestion that you have a look at this video of one of his recent presentations. I could nitpick with a couple of things he says, but not with the central thrust and certainly not with what seem to be his two central points: (1) we need to think long and hard about what all of this means and (2) begin defining the skillsets and the tools we need to do research in a fundamentally changed world. A tall order for someone who has been doing what I do as long as I've been doing it.
Posts from August 2009
I am delighted to see Mirta Galesic and Michael Bosnjak's 2003 study of the impact of questionnaire length on response quality in Web surveys finally make it into print, in this case the summer issue of POQ. We all believe down to our toes that long surveys are bad, but I have been disappointed on more than one occasion to see how poorly prepared we sometimes are to make the arguments against length to clients who too often want surveys that stretch beyond what we believe to be reasonable.
Mirta and Michael help to fill that gap with a well-designed experiment. I'm not going to describe the details of the design but will go right to the key finding: as survey length and complexity increase, response quality declines. Respondents spend less time answering individual questions, skip questions or choose non-substantive responses (like DK) more often, key fewer characters in open ends, show less response differentiation in grid-style questions, and are more likely to quit the survey completely. These behaviors become especially prevalent at around the 20-minute mark.
Many studies since have replicated these findings. I expect few readers of the article will be surprised by what they see. Google 'questionnaire length and response quality' and you'll see what I mean.
The obvious question: given all of this empirical evidence, why does survey length continue to be such a problem?
It seems like everywhere I turn the last week or so there is another call from CMOR (The Council for Marketing and Opinion Research) for input to a definition of market research. (Note that if you click the CMOR link it will actually take you to the MRA site, but that's another story.) On the face of it this doesn't sound like a bad idea were it not for the fact that not that long ago ESOMAR did the same thing and it became part of the Code of Practice published by ESOMAR and the International Chamber of Commerce. This code, I am told, has been adopted or endorsed by something like 37 industry associations in 32 countries. So why do we need to do this again? In a global industry like ours shouldn't we be focused on harmonization rather than each of us going our own way?
See this post to get a look at the ESOMAR definition.
A colleague has sent me a link to a survey and I feel obligated to pass it on. No need to worry, we are all supposed to pass it on. It's one of those: what we used to call snowball sampling.
To be honest, I don't care for the survey much, but I'm passing it on for two reasons. First, my colleague thinks it likely will produce "a moderately interesting set of data" on social media use in the research industry. Second, it's the best example I've seen in a long time of a survey that puts form before function. I found the experience to be a little like texting while driving; it reminded me of that driving test in the New York Times a few weeks back.
Here is the link. Enjoy it. Pass it on if you like.
A couple of months back Katie Harris down in Australia posted a comment referring to an entry in her blog--Zebra Bites. It's a blog worth reading and I've added it to the list on the right. The content is interesting and it's a much better looking blog than the one you're reading right now. (I recognize that's a low bar.) The entry to which she referred me is about the different kinds of insights we can get from online communities. It categorizes communities into "existing" and "manufactured." I think this kind of thing is terribly important right now. We are in the process of inventing a new methodology, and for that to work we need to be able to structure our thinking and our research processes, something one might argue was missing in the initial rush to online panels and surveys, something we only now are trying to sort out.
As I read Katie's stuff I was reminded of a piece on roughly the same topic that I had seen in IJMR earlier in the year. This was a piece by Niels Schillewaert, Tom de Ruyck and Annelies Verhaeghe titled, "Connected Research: how market research can get the most out of semantic web waves." These folks all work at Insites Consulting, a Belgian company that seems to always be in the vanguard of new online methods. The article sets out a taxonomy of online social media that is similar to what Katie has suggested, although they go into a lot more detail. What Katie calls "existing" they call "secondary connected research." Her "manufactured" they call "primary connected research." But no matter what you call it, this is a very useful way to think about the space and the kinds of insights we might get, along with the bias stemming from who we may be listening to. Both should be required reading for anyone interested in Web x.0.