Getting straight on response rates
January 31, 2011
AAPOR's Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys has long been the bible for survey researchers who want to track response and nonresponse systematically and to summarize those outcomes in standardized ways that help us judge the strengths and weaknesses of survey results. The first edition, published in 1998, built on an earlier CASRO document (1982) that seems to have disappeared into the mists of time. Since then the Standard Definitions Committee within AAPOR, currently chaired by Tom Smith, has issued a continuing series of updates as new practices emerge and methods change.
A 2011 revision has just been released, and a significant part of it is focused on Internet surveys. In the process it makes an important point that is often overlooked in practice: if we want to calculate a response rate for an Internet survey that uses an online panel, it's not enough to track the response of the sample drawn from that panel; we must also factor in the response to the panel's recruitment effort(s). This is relatively straightforward for the small number of panels that rely exclusively on probability-based recruitment (e.g., The Knowledge Panel or the LISS Panel). But the vast majority of research done in the US and worldwide uses panels that are not recruited with probability methods. The recruitment methods for these panels vary widely, but in almost all cases it's impossible to know with certainty how many people received or saw an invitation. So the denominator in the response rate calculation is unknown, and therefore no response rate can be computed. (The probability of selection is also unknown, which makes computation of weights a problem as well.)
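To see what factoring in recruitment does to the arithmetic, here is a minimal sketch, assuming the cumulative rate is simply the product of the rates at each stage of panel building and the study-level rate. The function name and all of the figures are hypothetical, purely for illustration.

```python
def cumulative_response_rate(recruitment_rate, profile_rate, completion_rate):
    """Chain the panel-building stages with the study-level outcome.

    recruitment_rate -- share of the original recruitment sample that joined
    profile_rate     -- share of recruits who completed the profile survey
    completion_rate  -- share of panelists invited to this study who responded
    """
    return recruitment_rate * profile_rate * completion_rate

# Hypothetical figures: a 30% recruitment rate and a 60% profile rate,
# carried through a 50% study-level completion rate.
print(f"{cumulative_response_rate(0.30, 0.60, 0.50):.0%}")  # 9%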
For these reasons a "response rate" applied to nonprobability panels is incalculable and inappropriate, unless the term is carefully redefined to mean something very different from its meaning in traditional methodologies. These are also the reasons why the two ISO standards covering market, opinion and social research (20252 and 26362) reserve the term "response rate" for probability-based methods and promote the term "participation rate" for access panels, defined as "the number of respondents who have provided a usable response divided by the total number of initial personal invitations requesting participation." And, of course, all of this is getting still more complicated as we move away from "classic" panels toward designs that look a lot like expanded versions of river sampling, with complex routing and even repurposing of cooperative respondents.
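A sketch of that participation rate shows why it is a much more modest claim: numerator and denominator are both counts the panel operator actually observes, and nothing in the quotient speaks to who could have been invited in the first place. The function name and figures are again hypothetical.

```python
def participation_rate(usable_responses, invitations_sent):
    """ISO-style participation rate: usable responses divided by the
    initial personal invitations requesting participation."""
    return usable_responses / invitations_sent

# Hypothetical: 1,200 usable responses to 4,000 panelist invitations.
print(participation_rate(1200, 4000))  # 0.3
```

Which is the point of the ISO distinction: the number is useful for describing a panel's performance, but it says nothing about the relationship of the respondents to any larger population.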
To my mind all of this is symptomatic of a much larger problem: the use of tools and techniques developed for one sampling paradigm (probability) to evaluate results produced under a very different one (nonprobability). This should not surprise us. We understand the former pretty well, the latter hardly at all. But therein lies an opportunity, if we can leverage it.