Previous month:
May 2011
Next month:
July 2011

Posts from June 2011

Waiting for mobile

Just about a year ago I wrote a post I called "Waiting for mobile" that somehow never made it online. The genesis of that post was a graphic I'd seen from allaboutsymbian.com that was built from comScore data. The data seemed to say that while smartphone use was rising rapidly, most smartphone owners were not using them to access the Web.

Now comScore has released new numbers for 2011. There are about 300 million mobile subscribers in the US and comScore says that around 75 million of those are now using smartphones. That equates to about one quarter of the total number of mobile phones. The same comScore report provides some use statistics for various online behaviors such as web browsing (39%), downloading apps (38%) and accessing blogs and social network sites (28%). So it seems that a majority of smartphone users still are not going online or, if they are, it's rare.

You can quarrel with the numbers if you like; I don't know enough about the methodology behind them to say how accurate they are. And you can do the math in various ways but it seems to me that it's tough to argue that more than about 15% of the US population is sufficiently active online with their smartphones to be potential survey respondents. The one exception to all of this is text messaging, but most of the mobile enthusiasts I talk to take a dim view of the potential of SMS for serious research.
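To put one version of that math on paper, here is a rough back-of-the-envelope sketch in Python using the comScore figures above. The US population figure (roughly 310 million in 2011) and the choice of the web-browsing share as a proxy for "active online" are my assumptions, not comScore's.

```python
# Rough back-of-the-envelope math using the comScore figures cited above.
# The US population figure is an assumption, not part of the comScore data.
mobile_subscribers = 300_000_000   # US mobile subscribers (comScore)
smartphone_users   = 75_000_000    # of which smartphone owners (comScore)
web_browsing_share = 0.39          # smartphone owners who browse the web (comScore)
us_population      = 310_000_000   # assumed 2011 US population

smartphone_share  = smartphone_users / mobile_subscribers
active_mobile_web = smartphone_users * web_browsing_share

print(f"Smartphone share of mobile subscribers: {smartphone_share:.0%}")                   # about 25%
print(f"Smartphone owners browsing the web:     {active_mobile_web / 1e6:.0f} million")    # about 29 million
print(f"As a share of the US population:        {active_mobile_web / us_population:.0%}")  # roughly 9%
```

Even if you count every online activity generously, it is hard to push that figure much past the 15% mentioned above.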

So I think that mobile continues to be a niche methodology, an intriguing one but a niche nonetheless. I have seen some really interesting uses of mobile around major events and in ethnographic studies, but I think it has some serious weaknesses for gen pop studies and anything other than a few very specific target groups. Recognizing those weaknesses, staying focused on how to execute well, and then fitting mobile into the broader spectrum of methodologies is where we need to put our energies. It could well be the next big thing, but not yet.


Here we go again

That was my first reaction when I read a press release from Gongos Research that starts off by saying, "a new study proves that smartphone-based survey data is statistically comparable to online survey data." (Emphasis added.) I would have been a lot more comfortable with this if instead of "proves" they had said "shows," and instead of "is statistically comparable" they had just said "can be statistically comparable." But they are certain. This study proves it. Like gravity, it's not just a good idea, it's the law.

Ray Poynter has already pointed out that one study does not prove anything. Einstein agrees with Ray, having once said that "no amount of experimentation can ever prove me right, but a single experiment can prove me wrong." (To be fair, I don't think he was necessarily commenting on the Gongos release.) A colleague of mine pointed out that, given the comparison is to online, you could dismiss this as damning with faint praise. My worry here is that we are entering a new era that is a sorry replay of the early years of online, when its evangelists made all sorts of claims based on a handful of poorly understood studies, only to discover a few billion dollars of research later that there were some problems we had overlooked and that online, at least as it was being practiced, was not all it was cracked up to be. Some of those problems we have yet to solve. But we're working on them.
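As an aside, "statistically comparable" is itself a slippery phrase. Failing to find a significant difference between two modes is not the same as demonstrating their equivalence; the latter usually calls for something like a two one-sided tests (TOST) procedure against a pre-specified margin. Here is a minimal sketch of that distinction in Python, using entirely made-up numbers rather than anything from the Gongos study.

```python
# Illustrative only: made-up numbers, NOT the Gongos data.
# Shows the difference between "no significant difference" and an actual
# equivalence test (two one-sided tests, TOST) for an estimate measured
# in two survey modes.
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_two_proportions(x1, n1, x2, n2, margin):
    """TOST equivalence test for the difference of two proportions
    (normal approximation). Returns the larger of the two one-sided
    p-values; equivalence is supported when that value is below alpha."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = 1 - norm_cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = norm_cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper)

# Hypothetical example: 52% vs 49% agreement on some item, n=400 per mode,
# with a pre-specified equivalence margin of +/- 5 points.
p = tost_two_proportions(208, 400, 196, 400, margin=0.05)
print(f"TOST p-value: {p:.3f}")  # about 0.29 here, so equivalence is not demonstrated
```

With these made-up numbers an ordinary difference test comes back non-significant, yet the TOST result does not support equivalence within the five-point margin either, which is precisely why a single non-significant comparison does not "prove" much.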

I think this happens because MR is first and foremost a business, and therefore making money is the first priority; doing good research comes in second. Creating competitive advantage is key, and one sure way to do that is to feature a cool new methodology with a dose of empirical research to demonstrate its validity. The upside here is that it encourages innovation and creative thinking. The downside is the confusion, disappointment and skepticism that it creates among clients. I don't mean to single out Gongos; they are neighbors and nice people, some of whom I know personally. This is an industry issue.

There is a better and more reasoned way to go about this sort of thing, and it relies on building a theoretical framework that specifies under what conditions a methodology works well and when it works poorly. I recommend to you an interesting paper by Carlile and Christensen on the process of building theory in management research. Briefly, their version of the scientific method starts by collecting lots of observations and then categorizing them based on outcomes and on the properties of those observations that might explain those outcomes. We don't do one study and scream "Eureka!" It's an ongoing cycle of replication and new experimental designs. Applied to the case at hand, we might look at a whole range of mobile studies, their target respondents, the sample, the study topic, the details of execution, the validity tests used, etc. We might build a body of research and from that develop something we might call "a theory" about when mobile is the right choice and how to use it effectively. Doing so would enhance the quality of the research done in MR.
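To make that categorization step concrete, here is a toy sketch in Python. The studies, attributes, and outcomes below are entirely hypothetical; the point is only the shape of the exercise: tabulate studies by their properties and see under which conditions the method holds up.

```python
# Toy illustration of the categorization step described above.
# The studies and outcomes below are entirely hypothetical.
from collections import defaultdict

studies = [
    {"target": "gen pop",     "topic": "brand tracking", "sample": "river",  "validated": False},
    {"target": "young urban", "topic": "event feedback", "sample": "panel",  "validated": True},
    {"target": "gen pop",     "topic": "political",      "sample": "panel",  "validated": False},
    {"target": "young urban", "topic": "ethnography",    "sample": "custom", "validated": True},
]

# Group outcomes by a candidate explanatory property (here, the target group).
by_target = defaultdict(list)
for s in studies:
    by_target[s["target"]].append(s["validated"])

for target, outcomes in by_target.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{target:12s} validated in {rate:.0%} of studies")
```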

Unfortunately, we've not really done that with online and so it probably won't happen with mobile either.  After all, we have businesses to run. Caveat emptor.


A Literary Digest moment

Last week McKinsey released results from a study of the impact of US healthcare reform (The Affordable Care Act) on employer-provided health insurance. The report estimated that 78 million workers were likely to lose their employer-provided health insurance once the law kicks in fully in 2014. This estimate is significantly at odds with other studies, including those by the Congressional Budget Office, RAND and The Urban Institute, all of which estimated an effect that was an order of magnitude smaller than McKinsey's. This was big news and anti-reform professional pols pounced on it. Others wondered about the differences and asked McKinsey to release the details of their methodology. At first McKinsey refused, claiming that the methodology was proprietary. A few days later they fessed up.

It turns out their "proprietary methodology" is one that many readers of this blog probably use all the time: a B2B online survey using an access panel. Whoops. And while they "stand by the integrity and the methodology of the survey," they also note that comparing their results to those of the CBO and other studies is comparing "apples and oranges." They didn't really mean to do that, but they understand "how the language in the article could lead the reader to think the research was a prediction, but it is not." Shame on those readers! Here is what McKinsey said in the article:

The Congressional Budget Office has estimated that only about 7 percent of employees currently covered by employer-sponsored insurance (ESI) will have to switch to subsidized-exchange policies in 2014. However, our early-2011 survey of more than 1,300 employers across industries, geographies, and employer sizes, as well as other proprietary research, found that reform will provoke a much greater response (p. 2).

I'm not so naïve that I don't understand that all McKinsey is doing here is promoting their consulting business. The bigger the problem seems, the more urgent the need for high-priced consultants to fix it. Nonetheless, there are two important lessons for us as researchers. The first is old news, still another reminder that there is no scientific basis for how most of us practice online research. The risks of getting the wrong answer probably are greater with online B2B than with consumers. The second is the importance of asking David Smith's Killer Question Number 3: "Does the new incoming evidence square with your prior understanding of this subject?" In other words, do your results jibe with whatever else you know from other studies and other data sources? If they don't, then you have some digging to do. McKinsey knew their results contradicted what others had done with much more rigorous methodologies, and then flaunted it rather than go back and take a closer look at what they were doing.

The question now is whether this cautionary tale will have any impact at all on how the public views the credibility of online or how we practice it. I would like to think so but that's probably wishful thinking.


Cell phone data quality

My first taste of a methodological imbroglio was 25 years ago and involved the introduction of CAPI (computer-assisted personal interviewing). There was widespread speculation that interviewers using laptops for in-person interviewing might lead to unforeseen impacts on data quality. Empirical research taught us that we needn't worry and so CAPI became the standard.

More recently, as the growth in wireless-only households has made it necessary to include cell phones in our telephone samples, there has been a lot of worry about the quality of data collected by cell. Poor audio quality, an increased likelihood of multitasking while being interviewed, and the possibility of environmental distractions are some of the things that people cite as possible causes of reduced data quality. Now research by Courtney Kennedy and Stephen Everett reported in the current issue of POQ has turned up little empirical evidence that such effects exist. They randomized respondents to be interviewed either by cell or by landline and then looked at six data quality indicators: attention to question wording, straightlining, order effects, internal validity, length of open-end responses, and item nonresponse. They found no significant differences on five of the six. The outlier was attention to question wording, where they found some evidence that cell phone respondents may listen less carefully to complex questions than those responding by landline.
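For readers who like to see what such a comparison looks like in practice, here is a minimal sketch. The counts are invented for illustration and are not taken from the Kennedy and Everett paper; the idea is simply that, for each indicator (say, straightlining in a grid question), you compare its incidence between the randomly assigned cell and landline groups.

```python
# Illustrative comparison of one data-quality indicator (straightlining)
# between cell and landline respondents. The counts below are invented,
# not taken from the Kennedy and Everett study.
from scipy.stats import chi2_contingency

#            straightliners, everyone else
observed = [[34, 466],   # cell respondents (n=500)
            [29, 471]]   # landline respondents (n=500)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value means no detectable difference on this indicator,
# the pattern the study reports for five of its six indicators.
```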

It's gratifying to know that there are researchers out there who approach new methods with skepticism and take a guilty-until-proven-innocent position. More gratifying still that other researchers do the hard work of carefully vetting those concerns with well-designed empirical studies.


Can we really do two things at once?

Like most research companies, mine now routinely includes cell phones in our telephone samples. Best practice requires that before we interview someone on a cell phone we determine whether it's safe to do the interview. If, for example, the respondent is driving a car, we don't do the interview. Yesterday someone asked me if it was OK to do the interview if the respondent is using a hands-free device. The research on this is pretty clear: the problem with cell phones and driving is the distraction, not the dexterity required to hold a phone in one hand and drive with the other. There is no basis for making an exception for hands-free.

This reminded me that responding to survey questions is not easy; it takes some serious cognitive energy. Most researchers accept the four-step response process described a decade ago by Tourangeau, Rips and Rasinski:

  1. Comprehension—understand the question and how to answer it (instructions)
  2. Retrieval—search memory to form an answer
  3. Judgment—assess completeness and relevance of the answer
  4. Respond—map the response onto the right response category

When respondents execute this process faithfully we say they are engaged. When they short-circuit it we talk about lack of engagement. A person talking on a cell phone while driving can either drive or engage with the survey. It's a rare person who can do both well simultaneously.

Which brings us to one of my favorite subjects: respondent engagement and rich media (aka Flash) in Web surveys. What is the rationale for arguing that dressing a Web survey up with more color, pimped-up radio buttons, a slider bar, or a slick drag and drop answering device is going to encourage respondents to execute the four-step response process with greater care than if we just show them the same kind of standard screen they use to enter their credit card details on Amazon? Or are unfamiliar interfaces just a distraction that makes it even less likely? It might get someone to the next question, but is that enough?