
Let’s get on with it

I spent some time over the weekend putting the finishing touches on a presentation for later this week in Washington at a workshop put on by the Committee on National Statistics of the National Research Council. The workshop is part of a larger effort to develop a new agenda for research into social science data collections. My topic is "Nonresponse in Online Panel Surveys." Others will talk about nonresponse in telephone surveys and in self-administered surveys generally (presumably mail). The workshop is driven by the increasing realization on the scientific side of the industry that, as response rates continue to fall, a key requirement of the probability sampling paradigm is violated. And so the question becomes: what are we going to do about it?

My message for this group is that online panels as we have used them in MR so far are not the answer. As I've noted in previous posts, response rates for online panels typically are an order of magnitude worse than those for telephone. At least with the telephone you start out with a good sample. (Wireless substitution is a bit of a red herring and completely manageable in the US.) With online panels you start out with something best described as a dog's breakfast. While it's become standard practice to do simple purposive sampling to create a demographically balanced sample, that's generally not enough. To their credit, Gordon Black and George Terhanian recognized that fact over a decade ago when they argued for "sophisticated weighting processes" that essentially came down to attitudinal weighting to supplement demographic weighting and correct for biases in online samples. But understanding those biases, and which ones matter for a given topic and target population, is not easy, and it doesn't always work. So a dozen years and $14 billion of online research later, the industry seems to be just weighting online samples by the demographics and stamping them "REPRESENTATIVE."
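
To make the weighting mechanics concrete, here is a minimal sketch of raking (iterative proportional fitting), the standard way to adjust a sample to known margins, whether those margins are census demographics or, in the Black/Terhanian spirit, attitudinal benchmarks from a reference survey. Everything in it (the column names, the target proportions, the tiny data frame) is a hypothetical illustration, not anyone's production method.

```python
# Minimal raking (iterative proportional fitting) sketch.
# All column names and target margins are hypothetical illustrations.
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Adjust weights so weighted sample margins match population targets.

    df      -- respondent data, optionally with an existing 'weight' column
    targets -- {column: {category: population_proportion}}
    """
    w = df.get("weight", pd.Series(1.0, index=df.index)).astype(float)
    for _ in range(max_iter):
        max_shift = 0.0
        for col, margins in targets.items():
            for cat, target_prop in margins.items():
                mask = df[col] == cat
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = target_prop / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:  # all margins close enough; stop early
            break
    return w

# Hypothetical usage: one demographic and one attitudinal margin.
sample = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+", "18-34", "55+"],
    "early_adopter": ["yes", "no", "no", "yes", "no"],
})
targets = {
    "age_group": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "early_adopter": {"yes": 0.15, "no": 0.85},
}
sample["weight"] = rake(sample, targets)
```

The loop itself is trivial; the hard part, as noted above, is knowing which attitudinal margins actually drive the bias for a given topic and target population.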

The facts would seem to be these. On the one hand, you can still draw a terrific probability sample, but the vast majority of people in that sample will not cooperate unless you make the extraordinary and expensive efforts that only governments have the resources to make. On the other hand, online panels have demonstrated that there are millions of people who are not only willing but sometimes eager to do surveys, yet we've not developed a good science-based approach for taking advantage of them. I take hope in the fact that some people are at least working on the problem. Doug Rivers regularly shares his research on sample matching, which is interesting, although I've not seen applications outside of electoral polling. GMI's new Pinnacle product also is interesting, but so far I've only seen a brochure. And statisticians tell me that there is work on nonprobability sampling in other fields that might be adapted to the panel problem.
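
For the curious, the core idea of sample matching can be sketched in a few lines: draw a probability sample from a high-quality frame, then for each drawn case substitute the closest available panelist on a set of covariates. The sketch below assumes hypothetical standardized covariates and a simple Euclidean nearest-neighbor rule; it illustrates the general idea, not Rivers's actual algorithm.

```python
# Rough sketch of sample matching: for each case in a probability-drawn
# target sample, pick the nearest not-yet-used panelist on shared covariates.
# Data, covariates, and the Euclidean metric are hypothetical illustrations.
import numpy as np

def match_panel(target, panel):
    """Return indices of panel rows matched 1:1 to target rows.

    target -- (n, k) array of covariates for the probability sample
    panel  -- (m, k) array of covariates for panel members (m >= n)
    """
    available = list(range(len(panel)))
    matched = []
    for row in target:
        # Nearest neighbor among panelists not yet matched.
        dists = [np.linalg.norm(panel[i] - row) for i in available]
        matched.append(available.pop(int(np.argmin(dists))))
    return matched

# Hypothetical usage with two standardized covariates (say, age, education).
rng = np.random.default_rng(0)
target_sample = rng.normal(size=(100, 2))    # drawn from a good frame
panel_members = rng.normal(size=(5000, 2))   # opt-in volunteers
idx = match_panel(target_sample, panel_members)
```

The appeal is that the matched panel inherits the covariate distribution of a real probability sample; the open question, as with weighting, is whether the covariates you match on capture the ways panel volunteers differ from everyone else.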

My message to the workshop group this week is simple: "Let's get on with it."
