
Posts from March 2007

Worse Than Web?

Arguably the most compelling story of the last ten years in the MR industry has been the introduction and then dramatic growth of online interviewing. Virtually non-existent 10 years ago, online today accounts for around $1.6 billion of research in the US alone. Despite that growth, the often-forecast demise of telephone interviewing has yet to occur. At least two factors may be at work, both of them sampling related. The first is the need for high-quality customer satisfaction research, especially for traditional businesses that generally do not have customer email addresses and for whom access panels cannot provide representative samples of their customers. The second is emerging concern about access panel data quality, increasingly evidenced by high levels of multiple panel membership, heavy survey taking by panel members, the emergence of fraudulent respondents who misrepresent their qualifications to panels and surveys, and worrisome levels of survey satisficing (a.k.a. inattentives) in online surveys.

Given the above, it is fair to ask whether telephone surveys, despite the plunge in response rates, might nonetheless produce better quality data than online for a significant number of MR applications. Key to this question is emerging academic research that increasingly treats nonresponse bias, alongside response rate, as an important measure of survey data quality. There is evidence, for example, that data quality differences between studies with substantially different response rates may not be as great as assumed. We are beginning to understand more clearly how the emergence of cell-phone-only households is likely to affect telephone survey data. And there may be validation techniques that allow us to better understand the levels of bias created by high nonresponse and how to correct for them.

Where all of this goes is not at all clear.  But what is clear is that our clients expect us to work these things through for them and deliver the best possible data for solving whatever problem they have hired us to help them solve.


Special Issue on Nonresponse

At the end of 2006 Public Opinion Quarterly published a special issue devoted to nonresponse in surveys.  There is no denying that declining respondent cooperation is the most serious problem we face as an industry.  Key government face-to-face surveys like the Current Population Survey and the National Health Interview Survey are still getting north of 85 percent, but the latter is losing almost a point of response a year.  A 2005 POQ article on nonresponse in a key academic telephone survey, the University of Michigan's Survey of Consumer Attitudes, reported that it was losing a point and a half of response a year.  A survey that once routinely got 70 percent plus is now struggling to get 40 percent.  In market research, the situation is even more dire, with response rates hovering around 10 percent or worse.

There are six articles in the issue and I'm not going to try to summarize them all here.  Instead I want to focus on two related themes.  The first is that response rate may not be as good an indicator of survey quality as we once thought.  For example, Scott Keeter and his colleagues at Pew conducted two identical RDD surveys.  One they worked hard on and got a 50 percent response rate; the other they worked less hard on and got a 25 percent response rate.  When they compared 84 measures of attitudes and behavior across the two surveys, they found only a handful of significant differences.
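
An item-by-item comparison of that sort usually comes down to a significance test on each estimate. Here is a minimal sketch of one such test, a two-proportion z-test; all of the counts are invented for illustration and are not Pew's data:

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two independent sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)        # pooled proportion under H0: p_a == p_b
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 52% vs. 49% giving the same answer in samples of 1,000 each
z = two_proportion_z(520, 1000, 490, 1000)
print(f"z = {z:.2f}")   # |z| < 1.96, so no significant difference at the 5% level
```

Repeating something like this over all 84 measures, and counting how many clear the significance bar, is the basic logic behind the "only a handful of differences" finding.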

The other is the increasing focus on nonresponse bias.  Simply put, this tries to get at how the survey estimates might differ if we were able to interview everyone.  Put another way, it asks whether those who did not respond differ from those who did in some important way that might change our results.  Conceptually, this is a very appealing argument, but measuring nonresponse bias is difficult because we don't know much of anything about the people who did not respond.
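
For a simple mean, the textbook deterministic decomposition (not something specific to the POQ issue) makes the idea concrete:

\[
\mathrm{Bias}(\bar{y}_r) \;=\; \bar{Y}_R - \bar{Y} \;=\; \frac{M}{N}\left(\bar{Y}_R - \bar{Y}_M\right)
\]

where \(N\) is the population size, \(M\) the number of nonrespondents, \(\bar{Y}_R\) the mean among respondents, and \(\bar{Y}_M\) the mean among nonrespondents. The bias grows with both the nonresponse rate and the respondent/nonrespondent gap, and that second factor is exactly the piece we cannot observe.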

Post-stratification adjustment (a.k.a. weighting) attempts to get at this by bringing the demographics of our completed interviews in line with those of the population.  It assumes that the attitudes or behaviors measured in the survey are strongly related to demographics.  But is that always the case?  Probably not.  To do this effectively we need to know a lot more about nonrespondents and about the characteristics that are most closely associated with whatever we are measuring in our survey.  No easy task.
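
As a rough illustration of what the adjustment does (every number below is invented), a post-stratification weight is just a cell's population share divided by its share among completed interviews; the weighted estimate moves only to the extent the survey measure actually varies across those cells:

```python
# Minimal sketch of post-stratification on a single demographic (age group).
# Population shares, sample shares, and survey results are all hypothetical.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # e.g., Census figures
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}   # achieved completes

# Weight per cell = population share / sample share
weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}

# Hypothetical proportion "satisfied" within each cell
satisfied = {"18-34": 0.40, "35-54": 0.55, "55+": 0.70}

unweighted = sum(sample_share[c] * satisfied[c] for c in satisfied)
weighted   = sum(sample_share[c] * weights[c] * satisfied[c] for c in satisfied)

print(f"unweighted: {unweighted:.3f}  weighted: {weighted:.3f}")
# If "satisfied" were identical across age groups, weighting would change nothing --
# which is why the adjustment only fixes bias tied to the weighting variables.
```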

And one more piece of bad news.  All of the things we have learned in methods studies that help us get higher response rates (e.g., offering different modes or creating topic salience) may be counterproductive precisely because they can create nonresponse bias.  For example, I might want to disclose the topic of my survey as a way to create interest and therefore higher participation, but this may mean that people who feel positively about the topic participate while those who find it uninteresting decide against responding.  When that happens, I have nonresponse bias.

Let me say the obvious: these are really tough issues.  But at least there is work going on out there that is trying to help us work through what really is a major challenge, if not a downright crisis, for the survey profession.  It is going to be very interesting to watch and learn.


The Future of Health Surveys

I spent the weekend outside Atlanta at the Health Survey Research Methods Conference.  This is an episodic (the last one was in 2003) invitation-only gathering of about 80 government health survey practitioners and academic survey methodologists.  We were all pretty much locked in a hotel for three days with nothing but survey talk and fattening foods.  The proceedings from the last conference can be found at http://www.cdc.gov/nchs/data/misc/proceedings_hsrm2004.pdf.  The proceedings from this one will be published in a few months, but in the meantime, here are the main highlights:

  1. There is a burgeoning field of surveys around disasters, most recently Katrina and the World Trade Center attacks.  Some of it is very short-term and tactical: what's going on, where are the needs greatest, and what needs to be done to protect public health in the immediate aftermath?  And some of it is long-term, especially looking at mental health impacts.
  2. More and more federal surveys are collecting biomarkers.  This is everything from saliva and hair samples to vaginal swabs.  There are, of course, lots of issues around logistics and respondent cooperation but it is fascinating stuff.  The standing joke is that soon we won't bother to interview people; we'll just collect biomarkers and figure it all out back in the lab.
  3. I was a discussant in a session that focused on tradeoffs in survey design.  The particular thing I latched onto there was the increasing focus on nonresponse bias as a better measure of survey quality than the traditional response rate.  I need to do a separate post on the issue, but the current trends are toward the development of better measures of survey quality and survey error.  Some very smart people are focused on this and I think the research over the next few years will be very interesting.

So it was a weekend well spent, although I'm not sure my wife would agree.