
AAPOR gets it wrong

Unless you’ve been on vacation the last couple of weeks, chances are you have heard that The New York Times and CBS News have begun using the YouGov online panel in the models they use to forecast US election results, part of a change in their longstanding policy of using only data from probability-based samples in their news stories. On Friday, the American Association for Public Opinion Research (AAPOR) issued a statement essentially condemning the Times (and its polling partner, CBS News) for “rushing to embrace new approaches without an adequate understanding of the limits of these nascent methodologies” and for lacking “a strong framework of transparency, full disclosure, and explicit standards.”

I have been a member of AAPOR for almost 30 years, served on its Executive Council, chaired or co-chaired two recent task force reports, and, in the interest of transparency, note that I unsuccessfully ran for president of the organization in 2011. AAPOR and the values it embraces have been and continue to be at the center of my own beliefs about what constitutes good survey research. That’s why I find this latest action so disappointing.

The use of non-probability online panels in electoral polling is hardly “new” or “nascent.” We have well over a decade of experience showing that with appropriate adjustments these polls are just as reliable as those relying on probability sampling, which also require adjustment. Or, to quote Humphrey Taylor,

The issue we address with both our online and our telephone polls is not whether the raw data are a reliable cross-section (we know they are not) but whether we understand the biases well enough to be able to correct them and make the weighted data representative. Both telephone polls and online polls should be judged by the reliability of their weighted data.
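The adjustment Taylor describes can be illustrated with a minimal post-stratification sketch: reweight a non-representative sample so its demographic margins match known population targets. All of the numbers below are invented for illustration; real weighting schemes use many more variables and methods such as raking.

```python
from collections import Counter

# Hypothetical raw online-panel sample: each respondent's age group.
# This sample deliberately over-represents older respondents.
sample = ["18-34"] * 20 + ["35-64"] * 50 + ["65+"] * 30  # 100 respondents

# Known population shares (e.g., from census benchmarks) -- invented here.
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

counts = Counter(sample)
n = len(sample)

# Post-stratification weight for each group:
# population share divided by the group's share of the sample.
weights = {g: population_share[g] / (counts[g] / n) for g in counts}

# After weighting, each group's weighted share matches the population target.
for g in sorted(weights):
    weighted_share = counts[g] * weights[g] / n
    print(f"{g}: weight={weights[g]:.2f}, weighted share={weighted_share:.2f}")
```

Here the under-represented 18–34 group gets a weight above 1 and the over-represented 65+ group a weight below 1; Taylor's point is that both phone and online polls live or die by how well such corrections capture the real biases.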

There is a substantial literature stretching back to the 2000 elections showing that with the proper adjustments polls using online panels can be every bit as accurate as those using standard RDD samples. But we need look no further than the 2012 presidential election and the data compiled by Nate Silver:

[Table: Nate Silver’s pollster accuracy ratings for the 2012 presidential election]

I don’t deny there is an alarming amount of online research that is just plain bad (sampling being only part of the problem) and should never be published or taken seriously. But, as the AAPOR Task Force on Non-Probability Sampling (which I co-chaired) points out, there are a variety of sampling methods being used, some are much better than others, and those that rely on complex sample matching algorithms (such as that used by YouGov) are especially promising. The details of YouGov’s methodology have been widely shared, including at AAPOR conferences and in peer-reviewed journals. This is not a black box.
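To make the idea of sample matching concrete, here is a toy sketch (this is emphatically not YouGov's actual algorithm, and all data are invented): for each record in a small probability-based target frame, select the closest available panelist on a handful of covariates, without replacement, so the matched sample mirrors the target frame's composition.

```python
# Toy sample-matching sketch. Covariates: (age_group, gender, education).
# Hypothetical target frame and panel, invented for illustration only.

def distance(a, b):
    """Count covariate mismatches between two records (simple metric)."""
    return sum(1 for x, y in zip(a, b) if x != y)

target_frame = [
    ("18-34", "F", "college"),
    ("35-64", "M", "no_college"),
    ("65+",   "F", "no_college"),
]

panel = [
    ("18-34", "F", "college"),
    ("18-34", "M", "college"),
    ("35-64", "M", "no_college"),
    ("35-64", "F", "college"),
    ("65+",   "F", "college"),
]

# Greedy nearest-neighbor matching without replacement.
matched = []
available = list(panel)
for record in target_frame:
    best = min(available, key=lambda p: distance(record, p))
    matched.append(best)
    available.remove(best)

print(matched)
```

Production systems use far richer distance functions, many more covariates, and larger frames, but the principle is the same: the matched panel inherits the representativeness of the probability-based frame it was matched to.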

On the issue of transparency, AAPOR’s critique of the Times is both justified and ironic. The Times surely must have realized just how big a stir their decision would create. Yet they have done an exceptionally poor job of describing it and disclosing the details of the methodologies they are now willing to accept and the specific information they will routinely publish about them. Shame on them.

But there also is an irony in AAPOR taking them to task on this. Despite the Association’s longstanding adherence to transparency as a core value, they have yet to articulate a full set of standards for reporting on results from online research, a methodology that is now almost two decades old and increasingly the first choice of researchers worldwide. Their statement implies that such standards are forthcoming, but it’s hard to see how one can take the Times to task for not adhering to them.

My own belief is that this is the first shoe to drop. Others are sure to follow. And, I expect, deep down most AAPORites know it. AAPOR is powerless to stop it and I wish they would cease trying.

I have long hoped that AAPOR, which includes among its members many of the finest survey methodologists in the world, would take a leadership role here and do what it can do better than any other association on the planet: focus on adding much-needed rigor to online research. But, to use a political metaphor, AAPOR has positioned itself on the wrong side of history. Rather than deny the future, I wish they would focus on helping their members, along with the rest of us, transition to it.
