One of the best papers at the ESOMAR Barcelona conference was presented by Brian Fine and his colleagues from AMR Interactive (an Australian research company), titled "Attitudinal Differences: Comparing People Who Belong to Multiple Versus Single Panels." I use the word "best" because it was one of two papers so designated by the Programme Committee. And I use the term "professional respondents" because increasingly that's the term being used to describe people who belong to multiple panels and who do lots of surveys.
The paper looks at demographic, behavioral, and attitudinal differences among respondents across a continuum that starts with CATI (computer-assisted telephone interviewing) and then moves from single online panel membership through several categories of multiple panel membership.
One unsurprising finding is that online often produces different results than CATI. The paper argues that propensity weighting may be able to solve this, although I should note that the AMR Web site references a partnership of some sort with Harris Interactive, the home of propensity weighting. That said, CATI/online differences are not really the theme of the paper.
The central theme is the effect of single vs. multiple panel membership and the findings are striking. For example:
- Single panel members are more likely to be older, male, educated, employed full time, and homeowners.
- Single panel members gave higher ratings to banks' reputations and lower ratings to airlines'.
- Multiple panel members drink less wine and invest less, but are more likely to smoke and own pets. They also are more likely to read magazines and own high-end technology products.
At the first panels conference in Budapest a paper from Survey Sampling, Inc. reported that attitudes on critical questions like propensity to buy a product varied by survey taking experience. The Fine study provides another perspective that would seem to support the SSI finding: people who belong to multiple panels and do lots of surveys are different attitudinally and behaviorally from those who belong to just one panel and do fewer surveys.
Unfortunately, Fine and his colleagues don't take us to the next step and help us understand how we can protect against or correct for these differences. They point out, for example, that even setting limits on survey participation with your panel provider doesn't help, because your panel provider does not know how many surveys the member may have done with other panel companies. And, of course, there is no obvious weighting magic that evens all of this out. The paper ends with a noble but nonetheless frustrating call for more research.
So what's a researcher to do? Well, at a minimum we should be routinely asking online respondents how many online surveys they are doing, so we can start to measure the problem in our own work. That will tell us a little more about our online samples and perhaps help explain any anomalies we see in our data. We should also be pressuring our panel suppliers to do more to identify multi-panel members and at least flag them, if not eliminate them from their panels. But it seems that a clear, effective solution is still a ways off.
We live in interesting times. Perhaps, too interesting.