
Posts from February 2011

Some things we should already know

Earlier in the week Jeffrey Henning (IMHO the best of the MR bloggers) served up a much-praised and frequently retweeted post on why respondents abandon Web surveys. His post does what most of the respondent engagement debate does not: it gets down to some basic facts about what it is in surveys that turns off people who have already volunteered to do them. Mostly that debate has started with the Flash imperative and never looked back.

I have taken the liberty of posting Jeffrey's chart below. It shows that the biggest single cause of abandonment is uninteresting subject matter. Media downloads are a bit of a red herring because the majority of surveys don't rely on multimedia, so let's put survey length as the second biggest factor.

[Chart: Causes of survey incompletion]

Now we can generalize that the main causes of abandonment are subject matter and length. Those grids that are so roundly condemned amount to a mere 15% (sorry, Andrew). I would extend Jeffrey's argument just a bit and suggest that outright abandonment is an extreme behavior, and that many more respondents complete boring, overlong surveys in a half-hearted way just to get the incentive.

What strikes me most about all of this is that it seems to be news to way too many people. The influence of topic and burden on survey participation has been talked about for decades. For one especially relevant discussion, have a look at Nonresponse in Household Surveys by Groves and Couper. Now I know it's mostly about in-person surveys so it can't possibly teach us anything about Web surveys, but if you were to read the chapter on how survey design affects participation you might appreciate that, regardless of mode, topic salience and the perceived burden of participating (a.k.a. length) are key. (Other important elements of survey design, such as incentives, are bridges that panel respondents already have crossed.)

The question in my mind is this: why don't we already know this stuff? Why do we have to keep relearning the basics? And why do we let ourselves get distracted by all this talk about engaging people with cute gadgets and eye candy? What respondents really want is shorter surveys on topics they find interesting. The Web may change a lot of things, but this isn't one of them.


Those pesky robo-polls

A new issue of Survey Practice is out, and among the short articles is one by Jan van Lohuizen and Robert Wayne Samohyl titled "Method Effects and Robo-calls." (Some colleagues and I also have a short piece on placement of navigation buttons in Web surveys.) Like most people I know, I have little regard for the accuracy of robo-calling as a competitor to dual-frame RDD/cell phone polling with live interviewers, and this article provides some grist for that mill. The paper looks at 624 national polls and the specific issue of Presidential approval. I'll just quote their conclusion:

" . . . while live operator surveys and internet surveys produce quite similar results, robo-polls produce a significantly higher estimate of the disapproval rate of the President and a significantly lower estimate for 'no opinion', attributing the difference in results to non-response bias resulting from low participation rates in robo-polls."

So far so good. But it reminded me of a report I'd recently seen (via Mark Blumenthal) about the latest NCPP report on pollster accuracy. In that study of 295 statewide polls in the 2010 cycle, the average error on the final outcome was 2.4 percentage points for polls with live interviewers, versus 2.6 for robo-polls and 1.7 for Internet polls. Of course, accuracy on Election Day is not the same as accuracy during the course of the campaign. As even casual observers have noticed, there is a tendency for all polls to converge as the election draws near. As this excellent post by Mark Mellman spells out, robo-polls may do well on Election Day but not so well in the weeks prior. I won't speculate as to the reasons.

But I take comfort in all of this. It's always nice to have one's prejudices confirmed.


TSE explained

I've been giving my share of presentations lately about the work of the AAPOR Online Panels Task Force, some of them to interested parties outside the standard AAPOR constituency. Early on I always mention that we organized our literature review around the Total Survey Error (TSE) model, and I've learned that not everyone is totally clear about what I mean. How timely, then, that POQ has just come out with a special issue devoted to TSE. And as has been their habit with special issues lately, it's available to all, subscribers and non-subscribers alike. At least have a look.


Let’s get on with it

I spent some time over the weekend putting the finishing touches on a presentation for later this week in Washington at a workshop put on by the Committee on National Statistics of the National Research Council. The workshop is part of a larger effort to develop a new agenda for research into social science data collections. My topic is "Nonresponse in Online Panel Surveys." Others will talk about nonresponse in telephone surveys and in self-administered surveys generally (presumably mail). The effort is driven by the increasing realization on the scientific side of the industry that as response rates continue to fall, a key requirement of the probability sampling paradigm is violated. And so the question becomes: what are we going to do about it?
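
To see why falling response rates threaten that paradigm, it helps to look at the standard deterministic expression for nonresponse bias: the bias of the respondent mean is (roughly) the nonresponse rate times the gap between respondents and nonrespondents. Here is a minimal sketch in Python, with entirely hypothetical numbers, purely to illustrate the arithmetic.

```python
# Minimal sketch of the textbook deterministic nonresponse-bias identity:
#   bias(ybar_r) = (m / n) * (ybar_r - ybar_m)
# where m is the number of nonrespondents, n the full sample size,
# ybar_r the respondent mean and ybar_m the (unobserved) nonrespondent mean.
# All numbers below are hypothetical, chosen only to show the mechanics.

def nonresponse_bias(response_rate: float,
                     ybar_respondents: float,
                     ybar_nonrespondents: float) -> float:
    """Bias of the respondent mean relative to the full-sample mean."""
    nonresponse_share = 1.0 - response_rate
    return nonresponse_share * (ybar_respondents - ybar_nonrespondents)

# Same respondent/nonrespondent gap, two very different response rates.
gap = 0.10  # hypothetical 10-point gap in, say, an approval proportion
for rr in (0.70, 0.09):
    bias = nonresponse_bias(rr,
                            ybar_respondents=0.55,
                            ybar_nonrespondents=0.55 - gap)
    print(f"response rate {rr:.0%}: bias of respondent mean = {bias:+.3f}")
```

With the same 10-point gap, a 70% response rate yields a bias of 3 points while a 9% response rate yields a bias of about 9 points. The lower the response rate, the more the estimate depends on an assumption we can rarely check: that nonrespondents look like respondents.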

My message for this group is that online panels as we have used them in MR so far are not the answer. As I've noted in previous posts, response rates for online panels typically are an order of magnitude worse than for telephone. At least with the telephone you start out with a good sample. (Wireless substitution is a bit of a red herring and completely manageable in the US.) With online panels you start out with something best described as a dog's breakfast. While it's become standard practice to do simple purposive sampling to create a demographically balanced sample, that generally is not enough. To their credit, Gordon Black and George Terhanian recognized that fact over a decade ago when they argued for "sophisticated weighting processes" that essentially came down to attitudinal weighting on top of demographic weighting to correct for biases in online samples. But understanding those biases, and which ones matter for a given topic and target population, is not easy, and it doesn't always work. So a dozen years and $14 billion of online research later, the industry seems to be just weighting online samples by the demographics and stamping them "REPRESENTATIVE."
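
For readers unfamiliar with what weighting beyond the demographics even looks like, here is a deliberately tiny sketch of raking (iterative proportional fitting) that balances a hypothetical panel sample on one demographic margin plus one attitudinal margin. It illustrates the general idea only; it is not the proprietary procedure Black and Terhanian described, and every record and target below is made up.

```python
# Generic raking (iterative proportional fitting) sketch in plain Python.
# One demographic margin (age group) and one attitudinal margin
# (an "early adopter" flag). All respondents and targets are hypothetical.

from collections import defaultdict

# Hypothetical panel respondents: (age_group, early_adopter_flag)
respondents = [
    ("18-34", True), ("18-34", True), ("18-34", False),
    ("35-54", True), ("35-54", False),
    ("55+", True), ("55+", False), ("55+", False),
]

# Hypothetical population targets for each margin (each sums to 1).
targets = {
    "age": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "early_adopter": {True: 0.25, False: 0.75},
}

weights = [1.0] * len(respondents)

def rake_margin(values, target):
    """Scale weights so the weighted distribution of `values` hits `target`."""
    totals = defaultdict(float)
    for value, weight in zip(values, weights):
        totals[value] += weight
    grand = sum(totals.values())
    for i, value in enumerate(values):
        weights[i] *= (target[value] * grand) / totals[value]

for _ in range(20):  # a few passes are plenty for toy data
    rake_margin([r[0] for r in respondents], targets["age"])
    rake_margin([r[1] for r in respondents], targets["early_adopter"])

for record, weight in zip(respondents, weights):
    print(record, round(weight, 3))
```

The attitudinal margin is the whole point: if panelists differ from the population on something like early adoption, demographic margins alone will never correct for it, and choosing which attitudinal variables matter for a given topic is exactly the hard part.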

The facts would seem to be these. On the one hand, you can still draw a terrific probability sample, but the vast majority of people in that sample will not cooperate unless you make the extraordinary and expensive efforts that only governments have the resources to make. On the other hand, online panels have demonstrated that there are millions of people who are not only willing but sometimes eager to do surveys, yet we've not developed a good science-based approach for taking advantage of them. I take hope in the fact that some people are at least working on the problem. Doug Rivers regularly shares his research on sample matching, which is interesting, although I've not seen applications outside of electoral polling. GMI's new Pinnacle product also is interesting, but so far I've only seen a brochure. And statisticians tell me that there is work on nonprobability sampling in other fields that might be adapted to the panel problem.
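
For those who haven't run across it, the intuition behind sample matching can be conveyed in a few lines: draw a target sample whose characteristics mirror the population, then select for each target record the closest available panelist on a set of covariates and interview the matched panelists. The sketch below is a conceptual toy, not Rivers's actual method; the covariates, records, and distance function are all placeholders.

```python
# Toy illustration of the idea behind sample matching: for each record in a
# target sample with population-like characteristics, pick the nearest
# available panelist on a small set of covariates. Purely hypothetical data.

# Covariates: (age, years_of_education)
target_sample = [(25, 12), (40, 16), (62, 14)]
panel = [(23, 12), (29, 11), (41, 18), (45, 16), (60, 12), (66, 15)]

def distance(a, b):
    """Squared Euclidean distance on the covariate vector."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

matched = []
available = list(panel)
for target in target_sample:
    best = min(available, key=lambda p: distance(target, p))
    matched.append((target, best))
    available.remove(best)  # match without replacement

for target, panelist in matched:
    print(f"target {target} -> matched panelist {panelist}")
```

A real implementation matches on many more covariates and has to confront what happens when no close panelist exists; the toy above glosses over both.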

My message to the workshop group this week is simple: "Let's get on with it."