Plus ça change
March 23, 2010
In the current issue of Research Business Report Bob Lederer muses on one of his favorite topics, online panel quality, and, at the risk of oversimplifying, seems to say that after lots of industry-wide soul searching it's now time for some action. He concludes by saying, "I suspect that 2010 will be all about tests and adoption of solutions that breathe reliability, replicability, and added value into an infant (decade-old) research mode that, to those paying attention, had deeply serious shortcomings."
Had? If we have learned nothing else over the last five years it ought to be this: the online panel model is deeply flawed in both theory and practice. Like my allergies, it can be controlled but it can never be fixed.
Let's talk theory first. Well, there is no theory. The arguments are all empirical: it works. The methodology has been legitimized largely by anecdote and the endless repetition of, "It works." No underlying scientific principles have been enunciated and no testable theories proposed. Without theory we can never be sure when the method will work and when it won't. A tiny handful of people recognize the flaws and are trying to apply a broader set of techniques for working with nonprobability samples, techniques developed in disciplines outside of survey research. I hope they come up with something. But the vast majority of practitioners in MR treat panel sample as if it were a probability sample drawn from a high-coverage frame. It's not. It's a tiny slice of the population that we just don't understand very well at all. The notion that panels are representative of the broader population is just plain silly.
And how about practice? For at least five years we have been talking about four main problems:
- People sometimes create false identities when they join panels and misrepresent themselves in order to maximize survey opportunities.
- People sometimes rush through surveys and don't make an honest effort to answer thoughtfully.
- People will sometimes take the same survey more than once, or worse yet, develop bots that simulate a respondent and take the same survey many times over.
- The experience of being on a panel and taking lots of surveys over time can change how people respond.
We are told that there are solutions for all of these, but too often the solutions themselves just introduce more problems. For example, it is rapidly becoming standard practice for a panel company to "validate" a panelist's identity by bumping his or her particulars up against one of the big marketing databases like Acxiom or Experian. But these databases fall well short of universal coverage of the population and tend to miss people who don't have credit cards or bank accounts. And so real people are rejected and more bias is introduced into the panel. Worse yet, solutions that are proposed are then ignored. For at least the last two years we have known that simply collecting a respondent's IP address and easily retrieved information from his or her browser can help us identify duplicates. Yet in the past week I've seen two studies with significant duplication in samples from well-established panel companies that claim to use digital fingerprinting to guard against just this sort of problem.
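To make the duplicate check concrete, here is a minimal sketch of the idea, assuming a Python backend and a handful of illustrative signals (IP address, user agent, screen size, timezone, browser languages). The field and function names are mine, not any panel company's actual implementation, and commercial fingerprinting products combine far more signals with fuzzier matching.

```python
import hashlib

def fingerprint(ip, user_agent, screen, timezone_offset, languages):
    """Hash a handful of signals a survey page can collect passively.
    These particular fields are illustrative; real systems use many more."""
    raw = "|".join([ip, user_agent, screen, str(timezone_offset),
                    ",".join(sorted(languages))])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def check_duplicates(completes):
    """Flag completes whose fingerprint matches an earlier complete.
    `completes` is a list of dicts holding a respondent 'id' plus the
    signal fields named above."""
    seen = {}      # fingerprint -> id of first complete with that fingerprint
    flagged = []
    for r in completes:
        fp = fingerprint(r["ip"], r["user_agent"], r["screen"],
                         r["timezone_offset"], r["languages"])
        if fp in seen:
            flagged.append((r["id"], seen[fp]))  # likely the same device twice
        else:
            seen[fp] = r["id"]
    return flagged
```

Even something this crude would catch the exact-match duplication I keep seeing. The obvious caveat: housemates sharing a computer, or different people behind the same corporate gateway, can collide on a fingerprint like this, so matches should be flags for review rather than automatic rejections. Which is my point in miniature: every fix carries its own new way to go wrong.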
So I think the notion that the online panel paradigm can be "fixed" is fanciful. The essential problem is that the goal our clients have set for us (faster and cheaper) is fundamentally at odds with what we pretend to deliver (accuracy and validity). And it didn't start with online. MR has been cutting corners for decades to make it faster and cheaper. Exhibit A: Quota sampling. We work in a competitive environment where the lower price wins more often than not and the buyer doesn't always understand what he's buying. Another lesson of the last five years: that dynamic is not going to change any time soon.
So regardless of their problems, online panels are not going away. Bob hopes that in 2010 we will see more tests and the actual adoption of some of the solutions now on the table. So do I. But there is a bigger challenge that I see no sign of the industry stepping up to. Here I quote Lyndon Johnson, who once said, "Boys, I don't know much, but I know the difference between chicken shit and chicken salad." We might take that to heart. What's been missing so far in all of the discussion of panel quality is a frank admission of what online is not. The panel quality solutions are fine, but they don't replace the need for us to do a much better job of conditioning the conclusions we draw, and the advice we give our clients, on the quality of the evidence at hand.