
Posts from June 2010

Some worthwhile advice on questionnaire design

I've been carrying two journals around in my backpack for what seems like forever, hoping that I could find the time on an airplane or some other dead time to write something meaningful about two different articles on questionnaire design. But I've given up and now I am just going to refer you to them.

The first is in IJMR, is authored by Petra Lietz, and is titled, "Research into questionnaire design: a summary of the literature." It's a vast literature, and no one should be expected to survey it all seriously in 25 pages, but she hits many of the key design issues, noting where there is disagreement or controversy among the leading methodologists. The second is in Social Science Computer Review and is by Paula Vicente and Elizabeth Reis. It carries the intriguing title, "Using Questionnaire Design to Fight Nonresponse Bias in Web Surveys." The authors do a nice job of reviewing the academic literature on Web questionnaire design.

Now that we have figured out that bad questionnaire design is a major reason, if not the most important reason, for bad respondent behavior, articles like these are must reading. But please, don't stop there!

MROCs to the rescue!

A colleague of mine likes to say that the real difference between the new MR and the old MR is that the new MR is a lot better at marketing. I thought of that when I saw this item from Research-Live describing the latest marketing push from Communispace. Now, I have nothing against these folks or the MROCs they sell. By all reports they are very good at what they do, and communities have clearly demonstrated their value as a research method. But I don't think that we serve the broader interests of the research industry by trashing other methodologies as a way to advance a particular business model. Now, to complement other zingers like "engagement trumps sample size," we have "trading purity for pragmatism" and "decoupling 'quality' from purity."

To be fair, their marketing piece on this topic claims to support "an integrative model" of quant and communities. But the characterizations of the former throughout have that do-you-still-beat-your-wife kind of quality to them. The adjectives tell the story. Quant is artificial, backward-looking, distant, controlling, authoritarian, generic, dry, top-down, alienating, and something called "researcher-centric." Communities, on the other hand, are relevant, authentic, engaging, purposeful, forward-looking, natural, agile, pragmatic, and "humanistic, person-centered." And then there are the callout quotes from clients saying basically that their communities are all they need.

Then there's the matter of facts. "There is plenty of evidence showing that, except for a few discrete segments, the Internet population in the U.S. is quickly becoming the general population. . . Online versus offline is quickly becoming a non-issue." Really? I've blogged on this before. Internet penetration in the US has pretty much stalled over the last five years, with a quarter to a third of the population still offline. According to the latest data from Pew, only about a third of adult US Internet users have joined a social network site, a likely indicator of willingness to participate in an MROC. The last time I looked, which admittedly was over a year ago, barely half of the countries in the EU were above the 50-60 percent threshold that ESOMAR describes as the tipping point for acceptance of online research. And with the exception of Brazil (36 percent penetration), the BRICs are still struggling to get north of 30 percent. It's one thing for clients to say that they don't care about these people because they don't have any money to spend, but that's a different argument.

In the end, this is just a recycling of the ongoing debate about quant versus qual which shouldn't be a debate at all. We need them both along with whatever other information we can turn up to help a client with a business decision. I am reminded of something Ray Poynter said in his master class on communities at last year's ESOMAR Congress in Montreux: "If you test a new concept with your community and they hate it, then forget about it. If they like it, then go out and do more research."

Staying connected to reality

I spent the better part of the last three days sitting in a room with representatives from research associations in 10 countries, going through the ISO 20252 section by section, if not word by word.  We had two overarching goals: (1) to make the standard independent of any specific data collection technology and (2) to incorporate the experience that comes from having certified almost 300 companies worldwide.  I confess that I also was doing my usual monitoring of Twitter and LinkedIn, imagining that I was somehow staying connected to the "real world."  That world generally doesn't have a whole lot of respect for the work we were doing in Toronto.  It thrives on the latest factoids about people's use of social media, airy statements from big CPG research buyers about creativity and insight, and the futility of asking questions to learn about what people think. Maybe I'm following the wrong people, but there seems to be precious little talk about best practices, about gathering information in a disciplined way, or about clearly walking clients through the steps we took to get to those insights we're delivering. You can't blame me for wondering if my time in Toronto was being well spent.

That's the point at which I reminded myself of Daniel Patrick Moynihan's famous admonition, "Everyone is entitled to their own opinion but not their own facts."  Call me naïve, but I think the primary goal of research still is to help clients make evidence-based business decisions.  Gathering that evidence in a systematic and transparent way, so that others (for example, clients) can evaluate it as we have evaluated it and follow our reasoning to the insights we're offering, is essential.

You often hear it said these days that research is part science and part art.  ISO is focused on the science part, and you really have to wonder why some people find it so scary.

Guest Post: Social Media Conference Part 2

Part 2 of my thoughts from the recent IIR Social Media/Communities conference, focused on how communities are being used by the corporate research function in place of, or in addition to, "traditional" methods.

As an aside, I used quotation marks above because the notion of "traditional" vs. "next generation" research was a commonplace at the conference. But it's also an example of sloppy thinking, in my opinion. There was no consistent definition of what "traditional" means, other than that it is somehow not quite as neat-o as "next generation" MR, a crude de-positioning that doesn't exactly advance our art.

Most of the social media-based research examples were additions to, rather than replacements for, other research methods. A typical example was Paula Alexander's (Burt's Bees) remark that her community had replaced the $30K package tests they used to do, but otherwise was "another qualitative data point" in a suite of various methods of engaging with customers.

The only example of a near-wholesale replacement of other methods by communities was Dawn Lacallade from Solar Winds – and a fascinating case study it was. As a very young company whose customers are uniformly tech-savvy IT professionals, Solar Winds had the advantage of centering its corporate research function on communities from the beginning.

Solar Winds' community (Thwack) is the centerpiece of most of its insight generation, though the company also has recourse to surveys. Thwack is a large, active community, and Dawn gave compelling examples of how they had been able to integrate community insights into product development to speed time-to-release and reduce costs.

Other themes I heard from multiple community managers:

  • Long-term communities eventually resolve to around 90% non-active participants or "watchers", 9% light contributors and 1% heavy contributors regardless of topic, country, etc. This may not be as bad as it sounds. By making participation easy you can use watchers to validate insights from heavier users. You can get light but direct feedback from light contributors, and have deep and direct engagement with heavy contributors.
  • MR community results are immediate, whereas social media monitoring results are ruminative: they should be sat on and considered despite their immediate availability. Even so, there are huge risks in treating community results as too immediate or quantifying them too quickly, before the community manager/researchers have thought them through.
  • If using a community for new product development, confidentiality issues are unavoidable. As Caroline Dietz of Dell remarked, any time you are testing a product you are taking a risk. You just have to accept that competitors are observing, and shame on you if they learn first. You also need to have a mechanism to take ideas to a private forum for further iteration with engaged community members.

Theo Downes-Le Guin