Online samples: Paying attention to the important stuff

Those of you who routinely prowl the MRX blogosphere may have noticed a recent uptick in worries about speeders, fraudulent respondents, and other undesirables in online surveys. None of this is new. These concerns first surfaced over a decade ago, and I admit to being among those working the worry beads. An awful lot has changed over the last 10 years, but it seems that not everyone has been paying attention.

Yesterday, my buddy Melanie Courtright at Research Now reached her I’m-not-going-to-take-it-any-more moment and posted an overview of what are now widely accepted practices for building and maintaining online sample quality. Most of this is not new, nor is it unique to Research Now. If you are really worried about this stuff, choose your online sample supplier carefully and sleep at night. ESOMAR has for many years provided advice on how to do this (select a supplier, that is, not sleep at night).

Of course, none of this guarantees that you are not going to have some speeders sneak into your survey who will skip questions, answer randomly, choose non-substantive answers (DK or NA), and so on. Your questionnaire could be encouraging that behavior, but let’s assume you have a great, respondent-friendly questionnaire. Then the question is, “Does speeding, with its attendant data problems, matter?” The answer is pretty much, “No.” It may offend our sensibilities, but the likely impact on findings is negligible. Partly that’s because we seldom get a large enough proportion of these “bad respondents” to significantly impact our results, but also because their response patterns generally are random rather than biased. See Robert Greszki’s recent article in Public Opinion Quarterly for a good discussion and example.
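
To make that concrete, here is a minimal simulation sketch in Python. The 5% contamination rate, the two-option question, and the true 40% agreement level are invented for illustration (they are not figures from Greszki’s article); the point is simply that respondents who answer at random pull the estimate slightly toward an even split and otherwise just add noise.

    import random

    random.seed(42)

    N = 2000          # completed interviews
    TRUE_P = 0.40     # true share who "agree" in the target population
    BAD_RATE = 0.05   # assumed share of speeders answering at random

    def simulate_once():
        agrees = 0
        for _ in range(N):
            if random.random() < BAD_RATE:
                # Speeder: picks one of the two answer options at random
                agrees += random.random() < 0.5
            else:
                # Attentive respondent: answers according to the true rate
                agrees += random.random() < TRUE_P
        return agrees / N

    estimates = [simulate_once() for _ in range(500)]
    print("mean estimate:", round(sum(estimates) / len(estimates), 3))
    # Expected value is 0.95 * 0.40 + 0.05 * 0.50 = 0.405, about half a
    # point of bias -- well inside the sampling noise at this sample size.

Under these assumptions the bias becomes material only when the contamination rate is large or the true value sits far from an even split.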

The second iteration of the ARF's Foundations of Quality initiative also looked at this issue in considerable detail and offered these three conclusions:

  • For all the energy expended on identifying respondents who give low-quality responses, weeding them out may make less of a difference in results than focusing more clearly on what makes for a good sample provider.
  • Further, when sub-optimal behaviors occur at higher rates, they generally indicate a poorly designed survey – some combination of too long, too boring, or too difficult for the intended respondents. Most respondents do not enter a survey with the intention of not paying attention or answering questions in sub-optimal ways, but start to act that way as a result of the situation they find themselves in.
  • Deselecting more respondents who exhibit sub-optimal behaviors may increase bias in our samples by reducing diversity, making the sample less like the intended population.

The irony in all of this is that the potential harm caused by a few poor-performing respondents pales in comparison to the risk of using samples of people who have volunteered to do surveys online, especially in countries with low Internet penetration. There is a widely accepted belief in the magical properties of demographic quotas to create representative samples of almost any target population. No doubt that works sometimes, but we also know that, depending on the survey topic, other characteristics are needed to select a proper sample. What characteristics, and when to use them, remain open questions. Few online sample suppliers have proven solutions, and outside of academia little effort is being put into developing one.
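
For readers who have never sat on the supply side, the quota machinery itself is mechanically trivial, which is part of its appeal. A minimal sketch in Python (the age-by-gender cells and targets below are invented for illustration) might look like this; note that nothing in it guarantees the people who land in each cell resemble the target population on anything other than the quota variables.

    # Minimal sketch of demographic quota filling. Cells and targets are
    # invented for illustration only.
    quota_targets = {
        ("18-34", "female"): 250, ("18-34", "male"): 250,
        ("35+",   "female"): 250, ("35+",   "male"): 250,
    }
    filled = {cell: 0 for cell in quota_targets}

    def screen(respondent):
        """Admit the respondent if their quota cell still has room."""
        cell = (respondent["age_band"], respondent["gender"])
        if cell in quota_targets and filled[cell] < quota_targets[cell]:
            filled[cell] += 1
            return True
        return False   # cell full or out of scope: screen out

    # A stream of willing panelists is screened against the quotas.
    print(screen({"age_band": "18-34", "gender": "female"}))   # True
    print(screen({"age_band": "35+",   "gender": "male"}))     # True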


Representivity ain't what it used to be

I am on my way back from ESOMAR APAC in Singapore where I gave a short presentation with the title, “What you need to know about online panels.” Part of the presentation was about the evolution of a set of widely accepted QA practices that, while standard in the US and much of Europe, are sometimes missing in countries where online is still a maturing methodology. The other part was about the challenge of creating representative online samples, especially in countries with relatively low Internet penetration. How can you learn anything meaningful about the broader market in a country like India, with only about 20% Internet penetration, using an online panel that has only a tiny fraction of that 20%?

At the same time I have tried to keep an eye on what has been happening at the AAPOR conference in Florida this past week and am delighted to see the amount of attention that online nonprobability sampling is getting from academics and what the Europeans like to call “social policy” researchers. Their business is all about getting precise estimates out of representative samples, something that thus far has mostly eluded online researchers despite online’s increasing dominance in the research industry as a whole.

The conversation in Singapore was much different; there, solving this problem seems less important. The cynical view is that there is little impetus to do better because clients aren’t willing to spend the money it takes to get more accurate data. The more generous view is that market researchers paint with a much broader brush. Trends over time and large differences in estimates are more important than really precise numbers; research outcomes are an important part of the decision-making process, but not the only part.

That said, it still seems to me that we have a responsibility to understand just how soft our numbers might be, where the biases are, and what all of that implies for how results are used. The obvious danger is in ascribing a precision to our results that is simply disconnected from reality. There already is way too much of that, and not just with online. Social media, mobile, big data, text analytics, neuroscience—all of it is being oversold. And the thing is, when you talk to people one on one, they know it.

I subscribe to the idea that our future is one in which data will be plentiful and cheap. It also will almost always be imperfect, every bit as imperfect as online today. The most important skill for market researchers to develop is how to learn from imperfect data, a task that starts by recognizing those imperfections and then figuring out how to deal with them rather than pretending they don’t exist.


Online Sampling Again

Last week two posts on the GreenBook Blog, one by Scott Weinberg and a response by Ron Sellers, bemoaned the quality of online research and especially its sampling. And who can blame them? All of us, including me, have been known to go a little Howard Beale on this issue from time to time. We all know the familiar villains—evil suppliers, dumb buyers, margin-obsessed managers, tight-fisted clients, and so on. I was reminded of a quote from an ancient text (circa 1958) that a friend sent me a few months back:

Samples are like medicine. They can be harmful when they are taken carelessly or without adequate knowledge of their effects. We may use their results with confidence if the applications are made with due restraint. It is foolish to avoid or discard them because someone else has misused them and suffered the predictable consequences of his folly. Every good sample should have a proper label with instructions about its use.

Every trained researcher has an idealized notion of what constitutes a good quality sample. Every experienced researcher understands that the real world imposes constraints, and that anyone who says you can have it all—fast, cheap, and high quality—is selling snake oil. So we make tradeoffs, and that’s OK, as long as we have “adequate knowledge of their effects.”

It helps to be an informed buyer, and for about ten years now ESOMAR has offered some version of its 28 Questions to Help Buyers of Online Samples. It’s an excellent resource, often overlooked. More recently, ESOMAR has teamed up with the Global Research Business Network to develop the soon-to-be-released ESOMAR/GRBN Guideline on Online Sample Quality (here I disclose that I was part of the project team). In the meantime, there is an early draft here. Its most prominent feature, IMO, is its insistence on transparency, so that sample buyers are at least informed about exactly what they are getting and in as much detail as they can stand.

Granted, that’s not the same as “a proper label with instructions about its use.” That still is a job left to the researcher. If you can’t or won’t do that, well, shame on you. Getting depressed about it is not an option. Or, as another of the ancients has said, “You're either part of the problem or part of the solution.”


AAPOR gets it wrong

Unless you’ve been on vacation the last couple of weeks, chances are that you have heard that The New York Times and CBS News have begun using the YouGov online panel in the models they use to forecast US election results, part of a change in their longstanding policy of using only data from probability-based samples in their news stories. On Friday, the American Association for Public Opinion Research (AAPOR) issued a statement essentially condemning the Times (and its polling partner, CBS News) for “rushing to embrace new approaches without an adequate understanding of the limits of these nascent methodologies” and for a lack of “a strong framework of transparency, full disclosure, and explicit standards.”

I have been a member of AAPOR for almost 30 years, served on its Executive Council, chaired or co-chaired two recent task force reports, and, in the interest of transparency, note that I unsuccessfully ran for president of the organization in 2011. AAPOR and the values it embraces have been and continue to be at the center of my own beliefs about what constitutes good survey research. That’s why I find this latest action so disappointing.

The use of non-probability online panels in electoral polling is hardly “new” or “nascent.” We have well over a decade of experience showing that with appropriate adjustments these polls are just as reliable as those relying on probability sampling, which also require adjustment. Or, to quote Humphrey Taylor:

The issue we address with both our online and our telephone polls is not whether the raw data are a reliable cross-section (we know they are not) but whether we understand the biases well enough to be able to correct them and make the weighted data representative. Both telephone polls and online polls should be judged by the reliability of their weighted data.
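
To make Taylor’s point concrete, here is the simplest possible form of such an adjustment, cell weighting against a known population distribution. It is a minimal sketch in Python with invented education categories, shares, and sample data; real election polls use more elaborate machinery (raking across several variables, propensity models, sample matching), but the logic is the same: understand the over- and under-representation, then weight it away.

    # Post-stratification (cell) weighting, a minimal illustrative sketch.
    # Population shares and sample data below are invented for the example.
    population_share = {"no_college": 0.60, "college": 0.40}

    # Each respondent: (education cell, 1 if supports candidate A, else 0)
    sample = ([("college", 1)] * 30 + [("college", 0)] * 20 +
              [("no_college", 1)] * 20 + [("no_college", 0)] * 30)

    n = len(sample)
    sample_share = {cell: sum(1 for c, _ in sample if c == cell) / n
                    for cell in population_share}

    # Weight = population share / sample share for the respondent's cell
    weights = [population_share[c] / sample_share[c] for c, _ in sample]

    raw = sum(y for _, y in sample) / n
    weighted = sum(w * y for w, (_, y) in zip(weights, sample)) / sum(weights)

    print(f"raw estimate:      {raw:.3f}")        # 0.500 (college over-represented)
    print(f"weighted estimate: {weighted:.3f}")   # 0.480 after re-balancing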

There is a substantial literature stretching back to the 2000 elections showing that with the proper adjustments polls using online panels can be every bit as accurate as those using standard RDD samples. But we need look no further than the 2012 presidential election and the data compiled by Nate Silver:

[Table compiled by Nate Silver comparing the accuracy of polls in the 2012 presidential election]

I don’t deny there is an alarming amount of online research that is just plain bad (sampling being only part of the problem) and should never be published or taken seriously. But, as the AAPOR Task Force on Non-Probability Sampling (which I co-chaired) points out, there are a variety of sampling methods in use, some much better than others, and those that rely on complex sample matching algorithms (such as the one used by YouGov) are especially promising. The details of YouGov’s methodology have been widely shared, including at AAPOR conferences and in peer-reviewed journals. This is not a black box.
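
For readers unfamiliar with the general idea: sample matching starts from a target sample drawn from a high-quality frame (census microdata, say) and then selects, for each target record, the most similar available panelist on a set of covariates. The sketch below is a conceptual illustration in Python with invented covariates and a crude mismatch-count distance; it is emphatically not YouGov’s implementation, which uses a far richer set of variables and is documented in the sources mentioned above.

    # Conceptual sketch of sample matching: for each record in a target
    # sample drawn from a trusted frame, pick the closest unused panelist.
    # Covariates, coding, and the distance rule are illustrative only.

    def distance(target, panelist):
        # Count of mismatches across a few categorical covariates
        return sum(target[k] != panelist[k]
                   for k in ("age_band", "educ", "region"))

    def match(target_sample, panel):
        available = list(panel)
        matched = []
        for t in target_sample:
            best = min(available, key=lambda p: distance(t, p))
            matched.append(best)
            available.remove(best)   # match without replacement
        return matched

    target_sample = [
        {"age_band": "18-34", "educ": "college",    "region": "south"},
        {"age_band": "65+",   "educ": "no_college", "region": "west"},
    ]
    panel = [
        {"id": 1, "age_band": "18-34", "educ": "college",    "region": "west"},
        {"id": 2, "age_band": "65+",   "educ": "no_college", "region": "west"},
        {"id": 3, "age_band": "35-64", "educ": "college",    "region": "south"},
    ]

    for t, p in zip(target_sample, match(target_sample, panel)):
        print(t["age_band"], t["educ"], t["region"], "-> panelist", p["id"])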

On the issue of transparency, AAPOR’s critique of the Times is both justified and ironic. The Times surely must have realized just how big a stir their decision would create. Yet they have done an exceptionally poor job of describing it and disclosing the details of the methodologies they are now willing to accept and the specific information they will routinely publish about them. Shame on them.

But there also is an irony in AAPOR taking them to task on this. Despite the Association’s longstanding adherence to transparency as a core value, they have yet to articulate a full set of standards for reporting on results from online research, a methodology that is now almost two decades old and increasingly the first choice of researchers worldwide. Their statement implies that such standards are forthcoming, but it’s hard to see how one can take the Times to task for not adhering to standards that do not yet exist.

My own belief is that this is the first shoe to drop. Others are sure to follow. And, I expect, deep down most AAPORites know it. AAPOR is powerless to stop it and I wish they would cease trying.

I have long hoped that AAPOR, which includes among its members many of the finest survey methodologists in the world, would take a leadership role here and do what it can do better than any other association on the planet: focus on adding much-needed rigor to online research. But, to use a political metaphor, AAPOR has positioned itself on the wrong side of history. Rather than deny the future, I wish they would focus on helping their members, along with the rest of us, transition to it.


A bad survey or no survey at all?

For a whole lot of reasons that I won’t go into, online privacy suddenly is front and center, not just in the research industry but in the popular press as well. The central message is that people are “concerned,” but about what exactly, and by how much, the answers are all over the map. One of the few clear things about this whole debate, if that’s what it is, is the ongoing misuse of online surveys to describe what is going on.

I am hard pressed to think of anything sillier than using online surveys to help us understand attitudes about online privacy. Think about it. You have a sample of people who have signed up to share their personal behavior, attitudes, and beliefs in online surveys. What in God’s name could make us think that these online extroverts, this sliver of the population, could possibly represent the range of attitudes about online privacy among “consumers,” as is generally alleged? If ever there was an example of an issue where online is not fit for purpose, this is it. Yet these surveys are churned out weekly, generally to serve the commercial interests of whoever commissioned them, and often widely cited as some version of the truth.

To quote H. L. Mencken, “A newspaper is a device for making the ignorant more ignorant and the crazy crazier.” Sometimes it feels like online surveys serve a similar purpose.