
Posts from April 2010

A recommendation

One of the links over there on the right goes to where usability guru Jakob Nielsen regularly holds forth on usability issues of all kinds. You can sign up to get regular email alerts to his Alertbox feature, and it's one of these recent alerts that caused me to write this post. At issue is his evaluation of three newsletters from the major UK political parties. I frankly don't care much about UK politics, but I had a look because I thought it might have some useful tips I could use in email survey solicitations and login pages. It turns out that his post is full of all sorts of interesting things, including effective use of Twitter and social network site design. There is a lot being written in MR these days about engagement and good design. Nielsen has been worrying about these issues for a decade, and his research has yielded many insights with applicability in MR. It's worth signing up for his alerts and keeping an eye on what he's up to.

Comparing apples

As some readers of this blog may already know, AAPOR recently released a report on online panels that was written by a task force that I chaired. One of its conclusions is that "researchers should avoid nonprobability online panels when one of the research objectives is to accurately estimate population values." In other words, we should stop using the word "representative" when we talk about survey results from Web surveys that use panels unless the panel was recruited from a probability sample. One reaction from a number of people has been to reprise the standard arguments that the traditional probability-based methods are no longer representative either, due to a combination of sample frame deterioration (especially with telephone, given a growing wireless-only population) and low response rates.

At this point I pause to note that this post is a personal statement and not meant to necessarily represent the views of AAPOR or the task force.

As I have said before, here and in other venues, the proposition that you can't do good representative probability-based research any more is just bogus. At least in the US. The US Census Bureau does it every month with its Current Population Survey, which uses a high-quality area probability sample and, the last time I looked, routinely achieves response rates in the mid 80s. And while wireless substitution presents a major challenge to telephone interviewing, most US researchers now use a dual frame sample design that draws from both landlines and cell phones to ensure representativeness. With effort and diligence, response rates north of 60 percent are still possible. So the problem with these methods, I would argue, is not that they are no longer representative. The problem with them is that they cost too much and they take too long, given the very real cost and time pressures MR clients feel in their businesses.

If we apply those same yardsticks to online, well, it's just not a comparison that makes any sense. In a later post I plan to look more closely at the whole issue of Internet penetration, but for now let's just go with the 78 percent or so that most often gets tossed around as representing how many US adults have access to the Internet. This number is remarkably close to the percentage of US adults who are reachable by a landline. So it seems tough to argue that the emergence of a substantial wireless-only population invalidates telephone research but makes online a better option.

Then there is the issue of response rate. There is almost nothing published to document response rates for the various recruiting efforts that panels use. But let's imagine a panel with three million members. Again in round numbers, if the US adult population is around 200 million and 78 percent of them (156 million) are online and therefore available to recruit, then we might assume that the response rate to get those three million panelists is around 2 percent. Response rates to individual online surveys vary widely, but let's assume I get 10 percent on a survey, which is not unusual and, with some panels, devoutly to be wished for. That makes my overall response rate 0.2 percent (10 percent times 2 percent).
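For anyone who wants to play with the assumptions, the back-of-the-envelope calculation above is easy to sketch in a few lines. All the figures here are the round numbers assumed in the text, not measured values:

```python
# Cumulative response rate for a hypothetical online panel,
# using the round-number assumptions from the text above.
us_adults = 200_000_000      # approximate US adult population
online_share = 0.78          # assumed share of US adults online
panel_size = 3_000_000       # hypothetical panel membership
survey_rr = 0.10             # assumed per-survey response rate

online_adults = us_adults * online_share        # 156 million recruitable
recruit_rr = panel_size / online_adults         # roughly 2 percent
cumulative_rr = recruit_rr * survey_rr          # roughly 0.2 percent

print(f"Recruitment rate: {recruit_rr:.1%}")
print(f"Cumulative response rate: {cumulative_rr:.2%}")
```

Swap in a different panel size or per-survey rate and the cumulative figure moves proportionally, but it stays a fraction of a percent under any realistic assumptions.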

The argument here is that we use online panels because they are faster and cheaper, not because they produce better data than traditional methods. We also use them because we can do cool applications that are hard to execute with those traditional methods. We can learn a great deal with online surveys and generate a lot of interesting insights for our clients. It's just that accurately estimating population values is not one of them.