
Posts from March 2009

New report on wireless substitution

Kate Harris has pointed out to me that the CDC just released a new report on the prevalence of wireless-only households in the US, and this time they are providing state-level estimates.  The variation across states is dramatic, to say the least.  Oklahoma is estimated to have a whopping 26.2 percent of its households unreachable via a landline phone, while Vermont is at just 5.1 percent.

These estimates are based on 2007 survey data, when the national estimate was 14.7 percent of households.  In 2008 that estimate rose to 17.5 percent.

Certainly intriguing and maybe a transformation

Arguably the most interesting session at last week's WARC Online Research Conference was led by Joel Rubinson from the ARF.  Over the last year or so he has been involved in an ongoing dialogue with major clients about their frustration with MR.  They see us as unable to paint the big picture in a compelling way that helps them really understand their markets and how to act in order to be successful in them.  We have become so good at distilling things down into bloodless numbers and mathematical models that we no longer see markets as composed of human beings making very human and often idiosyncratic buying decisions.  (This from a guy who was quick to point out that he was trained as, and generally has made his living as, a modeler.)

So what is the way out of this?  For starters, we need to do more “listening,” which seems to come down to finding ways to harvest online communities, social networks, blogs, and all of the other ongoing conversations that are part of the daily noise on the Internet.  As Joel likes to put it, we are looking for the answer to “the unasked question.”  Exactly how we do this is unclear, but it ought to be our top priority as an industry.

One clear implication is that we need a new set of skills from different disciplines if we are to do this kind of work.  There was talk of behavioral economists, all flavors of psychologists, and even cultural anthropologists.  It seems to veer into territory that mostly European researchers have been talking about over the last few years—measuring emotions and semiotics—along with the various kinds of word-of-mouth measurements that were all the rage in the US a couple of years back.

The bottom line seems to be: we have got to do this but we are still figuring out how.

The transformation part seems to me a more familiar theme, and it must be especially so to British ears attuned to what people like David Smith and Andy Dexter have been saying for quite some time.  Joel described the need to consider a variety of “data feeds” bridging both traditional and new methodologies, which we then synthesize into an overall story that gives our clients a richer and more insightful understanding of their market than anything they might get out of a single study relying on a single methodology.  This sounded to me an awful lot like the “holistic” research methodology that is probably best expressed in Smith and Fletcher’s book, The Art and Science of Interpreting Market Research Evidence.  Or perhaps I misunderstand.

Still, the best part of this in my view is that we are finally talking about a transformation in MR that doesn’t involve throwing out all other methods and charging headlong into this new method as the only way to do research.  We are adding to the toolkit rather than replacing it.

In the end, the goal remains that holy grail of MR: a seat at the table.   The idealist in me sees this as part of the admirable wish to see the research we do for our clients put to work.  The cynic worries that most of us don’t really want to be researchers at all; we want to be consultants.

Mobile Research Conference 09 - Day 2

I’m sure you’ve noticed that three weeks have elapsed and I’m only on Day 2. Well, a few things intervened. Still, I’d sum up the rest of the conference – which ran the gamut from strong presentations to a fascinating but somewhat tangential soliloquy on response rates – in very simple terms. What did I learn you should and shouldn’t do in mobile research?


Should

1. Use mobile for immediate, emotive, recall-based, or location-specific topics. Examples of ideal mobile survey questions: How attractive is that package you just picked up at Tesco? Which advertisement do you remember from the commercial break that just ended, and did you like or dislike it? Ipsos found that ad recall was more accurate in mobile vs. web surveys, though verbatims were much less rich.

2. Expect to get nearly all of your responses very quickly; if you don’t want that, stagger the release of the sample.

3. Recognize that unlike other modes, it costs respondents to participate (not just their time), even down to a per-response cost if you use SMS.

4. Consider an appropriate incentive structure that compensates for the cost of participating (SMS or data plan costs) over and above the typical remuneration for time and trouble.

5. If working with a panel provider, determine whether their mobile panel is purpose-built or simply regular panelists who opted in to mobile surveys.

6. Use an enhanced invitation (see the Ipsos paper for an example): personalized, emphasizing that participation is free, and stating the purpose of the study (Total Design stuff). This resulted in a higher response rate and more meaningful verbatims (though it had no effect in an email invitation to a web survey run in parallel, and a subsequent GlobalPark study contradicted the findings on richer verbatims).

7. Think about how a mobile survey could integrate or converge with a social network, so that you can combine a conversation or co-creativity exercise (network) with brief, point-in-time measures (mobile survey).
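The stagger-release advice in point 2 can be sketched as a tiny batching helper. This is a hypothetical illustration in Python; the function name, batch size, and panelist IDs are my assumptions, not any vendor's API:

```python
def stagger_release(sample, batch_size):
    """Split a sample of panelist IDs into batches that can be invited
    in waves, so responses arrive spread over time rather than nearly
    all at once."""
    return [sample[i:i + batch_size] for i in range(0, len(sample), batch_size)]

# A 25-person sample released 10 invitations at a time yields three waves.
waves = stagger_release(list(range(25)), 10)
print([len(w) for w in waves])  # → [10, 10, 5]
```

In practice each wave would be sent to the survey platform on its own schedule (hourly, daily, etc.), but the batching logic is this simple.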

Should not

1. Use mobile gratuitously. There is no firm evidence that mobile respondents will be more engaged than those in other modes/panel types, or that overall data quality or time-to-insight improves. Examples of really successful implementations are few and far between.

2. Expect to get older people or broadly representative consumer samples out of mobile panels. That said, every mobile panel vendor said his or her panel is underutilized at this point, so there is room for experimentation.

3. Offer people a mobile survey as one among many survey mode options. It is fundamentally different from other modes, and having too many options confuses people and lowers response rates.

4. Field SMS surveys (as opposed to web-based mobile surveys) if you can avoid them, though the invitation trigger may be an SMS. SMS surveys are easier to field and reach a broader base of respondents, but they trigger major concerns about cost and are tough to integrate on the back end, since each response-respondent combination is a separate record.

5. Tell people just once that it won’t cost them to participate and expect them to remember it. In the experiments and field results shown, cost to participate was a major concern no matter how often respondents were reminded that it was free. In the Ipsos study, even though SMS was free, half of those dissatisfied with the experience focused on cost.

6. Expect response rates from mobile panels to be significantly higher than from “normal” Internet access panels. Cited response rates were mostly in the 10% to 15% range.

Reporting made easy

I have been in London attending the WARC Online Research Conference.  I started a report and then realized that there was at least one blogger at the event, and he has made my report on Day 1 easy.  Here is the link.  I expect the link to Day 2 will appear at any moment since he, unlike me, is being paid to do this.

Those reports notwithstanding, I will have some posts on specific sessions over the next several days.  There was a lot to consider.