
Posts from January 2011

Getting straight on response rates

AAPOR's Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys has long been the bible for survey researchers interested in systematically tracking response and nonresponse in surveys and summarizing those outcomes in standardized ways that help us judge the strengths and weaknesses of survey results. The first edition, published in 1998, built on earlier work by CASRO (1982), a document that seems to have disappeared into the mists of time. Since then the SD Committee within AAPOR, currently chaired by Tom Smith, has issued a continuing series of updates as new practices emerge and methods change.

A 2011 revision has just been released, and a significant part of it focuses on Internet surveys. In doing so it makes an important point that is often overlooked in practice: if we want to calculate a response rate for an Internet survey that uses an online panel, it's not enough to track the response of the sample drawn from that panel; we must also factor in the response to the panel's recruitment effort(s). This is relatively straightforward for the small number of panels that rely exclusively on probability-based recruitment (e.g., the Knowledge Panel or LISS Panel). But the vast majority of research done in the US and worldwide uses panels that are not recruited with probability methods. The recruitment methods for these panels vary widely, but in almost all cases it's impossible to know with certainty how many people received or saw an invitation. And so the denominator in the response rate calculation is unknown and no response rate can be computed. (The probability of selection is also unknown, which makes computation of weights a problem as well.)

For these reasons a "response rate" applied to nonprobability panels is incalculable and inappropriate, unless the term is carefully redefined to mean something very different from its meaning in traditional methodologies. These also are the reasons why the two ISO standards covering market, opinion and social research (20252 and 26362) reserve the term "response rate" for probability-based methods and promote the term "participation rate" for access panels, defined as "the number of respondents who have provided a usable response divided by the total number of initial personal invitations requesting participation." And, of course, all of this is getting much more complicated as we increasingly move away from "classic" panels toward designs that look a lot like expanded versions of river sampling, with complex routing and even repurposing of cooperative respondents.
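To make the arithmetic concrete, here is a minimal sketch of the two quantities at issue. The function names and numbers are mine, purely for illustration: a cumulative response rate that multiplies the response rates at each known stage of a probability-recruited panel, and an ISO-style participation rate for an access panel, which deliberately says nothing about the unknown recruitment denominator.

```python
# Illustrative sketch only; function names and numbers are hypothetical.

def cumulative_response_rate(recruitment_rate, survey_rate):
    """Probability-recruited panel: the overall response rate is the product
    of the response rates at each stage, so every stage needs a known denominator."""
    return recruitment_rate * survey_rate

def participation_rate(usable_responses, invitations_sent):
    """Access panel (ISO 20252/26362 usage): usable responses divided by the
    initial invitations requesting participation. It is not a response rate,
    because the number of people who saw the original recruitment appeal is unknown."""
    return usable_responses / invitations_sent

# Probability-recruited panel: 30% joined at recruitment, 40% responded to this survey.
print(cumulative_response_rate(0.30, 0.40))  # 0.12 -> a 12% overall response rate

# Nonprobability access panel: 1,200 usable completes from 5,000 survey invitations.
print(participation_rate(1200, 5000))        # 0.24 -> a 24% participation rate
```

The point of the contrast is that the first calculation is impossible for a panel whose recruitment denominator is unknown, which is exactly why the ISO standards fall back on the second.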

To my mind all of this is symptomatic of a much larger problem, namely, the use of a set of tools and techniques developed for one sampling paradigm (probability) to evaluate results produced under a very different sampling paradigm (nonprobability). This should not surprise us. We understand the former pretty well, the latter hardly at all. But therein lies an opportunity, if we can leverage it.


"Twitter is outdirectedness cubed."

My wife is a borderline obsessive reader of the New York Times.  When she gets behind, certain sections of it pile up around the house.  But in the end all of it gets read.  Some of it she clips and puts on my desk, where it also piles up, but in the end all of it, too, gets read.

Hence my late discovery of this interesting essay from July with the title, "I Tweet, Therefore I Am."  The essay references a forthcoming book by Sherry Turkle at MIT.  In it Turkle talks about how posts to Facebook or Twitter gradually take on the character of a performance, an identity constructed for consumption by other people, not really your true self.  She reminds us of how David Riesman's sociological masterwork The Lonely Crowd described the transformation of American culture from inner-directed to outer-directed, and how social media has sped that up.  Hence the title of this post, a quote from Turkle.

As any pioneer on the bleeding edge of the NewMR will tell you, we have embraced the basic tenet of behavioral economics that says people don't always do what they say they will do and often can't explain after the fact why they did what they did.  This makes working with survey data a bit challenging, to say the least.  But by the same token, if the people we listen to in social media are not true selves but selves constructed for our consumption, that raises some pretty significant challenges for working with social media information.

Or, to put it in terms a survey geek can understand, we might say: Twitter is social desirability on steroids.


Six of one . . .

We do a large amount of survey work with physicians and virtually all of it relies on online panels. Not that long ago we had a client who insisted on our using mail to recruit physicians to Web surveys, partially on the premise that it would produce a more representative sample. That client has since backed away from that insistence, although the reason probably has more to do with turnaround and perhaps cost than sample quality. Now a colleague who works with that client has asked about the pros and cons of mail recruit to Web versus use of an online panel. It's a hard question.

There are about 850,000 licensed physicians in the US. A first-rate online physician panel will generally have enrolled about 150,000 of them, or about 18% of the total. Putting the invitation to join in front of most of those 850,000 is doable, and it probably is safe to assume that most companies specializing in physician sample have done just that. So in statistical terms, the response rate to the recruit stage of a survey using the panel is 18%. Response rates for the physician panels we work with are typically in the 25%-30% range. The overall response rate is the product of the recruitment response rate and the individual survey response rate, or around 5%. Back when we were mailing to samples of physicians, our typical response rate was also 5%.
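A rough, back-of-envelope version of that calculation (using the approximate figures above, with 27.5% taken simply as the midpoint of the 25%-30% range) might look like this:

```python
# Back-of-envelope illustration of the figures quoted above (approximate).
licensed_physicians = 850_000
panel_members = 150_000
survey_response_rate = 0.275  # midpoint of the 25%-30% range

recruitment_rate = panel_members / licensed_physicians   # ~0.18
overall_rate = recruitment_rate * survey_response_rate   # ~0.05

print(f"recruitment: {recruitment_rate:.0%}, overall: {overall_rate:.0%}")
# recruitment: 18%, overall: 5% -- the same 5% we used to get by mail
```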

So which is the better sample? To answer that question we need to know a lot more than we now know. Both samples have huge amounts of nonresponse (95%), and unless we know a good deal about the factors that drive that nonresponse and their possible interaction with the specific study topic, it's impossible to say whether one sample is better (i.e., more representative) than the other. The time-honored way to answer that question is to select a small sample of nonresponders and work it aggressively to try to uncover the dynamics of nonresponse. That's a bit more complicated in the case of the online panel because there are two stages of nonresponse (the original recruit to the panel and the individual survey), whereas the mail method has just the one stage of the individual survey. Just as with consumer panels, we understand precious little about the attitudinal and behavioral characteristics that drive some people to join an online panel while the vast majority of others do not.

I can imagine a methodological study that would shed some light on all of this, but relatively little of this kind of work is done because working with physicians is so expensive, with honoraria easily running $100 or more. That's a real shame, because we should know more about these dynamics.

So the Pharma industry marches on, relying almost exclusively on online panelists about whom we know relatively little. To their credit, these respondents are very practiced at doing very complex surveys (as are panel respondents generally) loaded with difficult exercises (such as discrete choice or marketing message review) that may take 45 minutes or more to complete, albeit for incentives that are 10 or more times what we would pay a consumer. And there is little evidence to argue that doing so produces different results than we would get from a classic probability sample drawn from a high-quality frame.


Where is the insight?

Just prior to the holidays the folks at the Harvard Business Review released the results of a survey that asked 2,100 companies about their use of social media. This finding is especially telling:

Nearly two-thirds of the 2,100 companies who participated said they are either currently using social media channels or have social media plans in the works. But many still say social media is an experiment, as they try to understand how to best use the different channels, gauge their effectiveness, and integrate social media into their strategy.

Despite the vast potential social media brings, many companies seem focused on social media activity primarily as a one-way promotional channel, and have yet to capitalize on the ability to not only listen to, but analyze, consumer conversations and turn the information into insights that impact the bottom line.

The methodology of this particular study is underwhelming but the results are largely consistent with other things I see and hear, some of which I blogged about back in August. 

Were a modern-day Rip Van Winkle to awake after 20 years with an interest in MR, and were the only sources available to him conference proceedings and Twitter, he would probably quickly conclude that MR = Social Media Analysis.  But outside of that echo chamber it's a different world. Despite all of the hand-waving about social media, the NewMR, Web 2.0, etc., clients, for the most part, remain unconvinced. The latest estimates from Inside Research suggest that in terms of dollars spent, social media research is barely above the noise level in an $8 billion US industry. For the moment, at least, it seems that we are losing the argument where it counts most.