
Posts from May 2007

Telephone or Mail?

We recently fielded a query from a client who has a long-running, general-population telephone survey but now faces some internal pressure to consider mail.  His request was for some information to help with that decision.  Dan Zahs and I put our heads together and produced a short overview of the issues.  You can see that document here.

Over about the last 20 years, telephone has generally been preferred over mail for general-population surveys, with mail's most attractive feature being its lower cost.  Of course, with the deterioration of the RDD sample frame and continuing concerns about Web, that could change.  Mail surveys could become more attractive, although the frames available for doing really good, representative mail surveys are still being evaluated.  It's all in the overview.

But as of this writing this geek still likes telephone.


Sample Blending

This is a euphemism coined by the good folks at Lightspeed Research for combining respondents from multiple panels into a single sample. This sometimes looks attractive when a single panel can't deliver enough respondents either because of low incidence or a small geographic study area.  Sometimes "sample blending" is a deliberate design decision and, sadly, sometimes it's a decision that a panel provider makes without bothering to consult their client.  One hopes the latter cases are few and far between.

Back in 2006 Lightspeed Research did a study to identify the impacts, if any, of using multiple panels.  They used their own panel plus two unnamed others.  They interviewed 600 respondents per panel and in each case used quotas to get census-balanced demographics.  The questionnaire was mostly standard CPG brand awareness and use questions.  They found some differences, but nothing dramatic.  Most differences were in the single digits.  They conclude "that behaviorally LSR and the other two sample sources used for this test are similar."  To quote the Lightspeed person who gave me the report, "We did the study and proved that it's OK."

Well, I'm not so sure.  One problem with the study is that it does not appear they did anything to try to dedup across panels.  We know that multiple panel membership is fairly common, 30% to 60% depending on the panel, and that people who belong to multiple panels also tend to be frequent participants.  I'm not about to try to compute the likelihood of panel overlap in a study with an N of 600, but it just seems that some effort ought to be made to deal with this problem.
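To see why dedup matters even at N of 600, here is a back-of-the-envelope sketch in Python.  All of the numbers are hypothetical (panel sizes, joint membership, and the over-selection factor for frequent participants are my assumptions, not anything from the Lightspeed report), but the point survives: under uniform random selection the expected overlap is tiny, while even a modest tilt toward frequent responders inflates it quickly.

```python
def expected_duplicates(shared, n_a, pop_a, n_b, pop_b, boost=1.0):
    """Expected number of people who land in both samples.

    shared : number of members belonging to both panels
    n_a, n_b : sample sizes drawn from panels A and B
    pop_a, pop_b : total panel sizes
    boost : relative over-selection of multi-panel members
            (they tend to be frequent participants)
    """
    p_a = min(1.0, boost * n_a / pop_a)  # selection prob. in panel A
    p_b = min(1.0, boost * n_b / pop_b)  # selection prob. in panel B
    return shared * p_a * p_b

# Hypothetical: two 200,000-member panels with 30% joint membership
uniform = expected_duplicates(60_000, 600, 200_000, 600, 200_000)
skewed = expected_duplicates(60_000, 600, 200_000, 600, 200_000, boost=5.0)
print(round(uniform, 2), round(skewed, 1))  # 0.54 13.5
```

With purely random selection you would expect well under one duplicate, but if multi-panel members respond at, say, five times the base rate, a dozen or so duplicates per panel pair becomes plausible.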

More importantly, I think most of us believe that panel recruitment and management practices have a major impact on panel composition and on individual samples drawn from those panels.  In other words, we would expect every panel to have some bias based on recruitment sources and management practices, and understanding that bias is important.  For example, blending panels with differing SES biases might produce more differences in response patterns than using panels with more or less the same bias.

I don't imagine this problem going away anytime soon.  Clients and researchers alike have developed a fondness for studies that are hard to do in any mode but Web.  Those studies are generally less expensive than other modes, have faster turnaround times, and use questionnaires that work best in a visual, self-administered format.  And restricting them to large, national or multi-state study areas is less and less practical.  Doing these studies well and producing results that clients can safely use to make their business decisions is going to require a more sophisticated approach than what Lightspeed has done in this study.


More on Cell-Only Households

Less than a week ago I posted an item on this topic that more or less concluded that we don't need to worry yet.  Then I went to AAPOR, and now my nervousness on this issue is up a notch, maybe two.

The talk of the conference was a new report from CDC saying that 15.8 percent of US households do not have a landline, and most of those use a wireless phone.  (Here is the link to the report, courtesy of Dan Zahs.)  So the cell-only household rate is now estimated at 12.8 percent.  The report also reinforced earlier findings about the differences in reported health behaviors in cell-only vs. landline households.  These findings were discussed in formal sessions for almost two full days, along with considerable attention to the practical issues of doing research with this population.  Some highlights:

  • While you can't call cell phones with an automated dialer, you can nonetheless call and interview them.  In the past there has been a great deal of concern about the charges incurred by the respondent, but with the evolution in pricing plans toward large blocks of minutes, cell users seem to be less concerned about it.  Nonetheless, one probably should be prepared to compensate them if they ask.
  • There are sampling frames that one can buy from both SSI and Marketing Systems Group, but these frames are generally not as clean as what you get with traditional RDD frames.  There tend to be lots of non-working numbers and very poor matches between expected geographic location (based on exchange) and actual location.
  • For both of the above reasons, doing research with these folks is expensive, as little as three and as much as five times more expensive than RDD, and the response rates are no better.  And, of course, you get all kinds of cell phone users, most of whom also have landlines.
  • There also is some experimentation around using a Web panel to identify cell-only households and then trying to convince those respondents to do a telephone interview.  Getting them to the telephone avoids a potential mode effect.  Of course, the problem with this strategy is that you are then mixing apples with oranges and weighting is very challenging.

I also had the opportunity to talk at some length with two people who are spending a lot of their time tracking all of this: Stephen Blumberg from CDC and Clyde Tucker from BLS.  Clyde's "night job" is doing electoral forecasting for the TV networks.  To both I posed the key question: given what we know so far about the differences between cell-only and other households, is the problem serious enough to have a significant impact on the overall estimates?  Put another way, if we weight by age, gender and maybe an SES indicator like education, does the problem go away?  They both had the same response.  First, they said it depends on the questions.  Clearly, health behaviors are an issue, while the most recent research on political polling suggests that political attitudes and behavior are less affected.  Second, they argued that this is a very public issue, and at some point not including cell-only households raises questions about the credibility of the work, even though empirically the impact may be small.  Clyde believes, for example, that by the time the 2008 election rolls around the cell-only rate will be very close to 20 percent, and he therefore wonders how one can possibly not include them in political polls.
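For readers who haven't done this kind of weighting, what Blumberg and Tucker are describing is ordinary post-stratification: each demographic cell's weight is its population share divided by its sample share, so under-covered groups (like young adults in a landline frame) get weighted up.  A minimal sketch, with age categories and shares that are invented purely for illustration:

```python
from collections import Counter

def poststrat_weights(sample_cells, pop_shares):
    """Cell weight = population share / sample share."""
    n = len(sample_cells)
    samp_shares = {cell: k / n for cell, k in Counter(sample_cells).items()}
    return {cell: pop_shares[cell] / samp_shares[cell] for cell in samp_shares}

# Hypothetical landline sample that under-represents young adults
sample = ["18-29"] * 10 + ["30-64"] * 60 + ["65+"] * 30
pop = {"18-29": 0.22, "30-64": 0.56, "65+": 0.22}
weights = poststrat_weights(sample, pop)
# weights["18-29"] comes out around 2.2: young respondents weighted up
```

The catch, as both men note, is that this only removes the bias if cell-only people look like other people in the same age/gender/education cell on whatever the survey is actually measuring, and for health behaviors that assumption appears not to hold.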


Those Troublesome Cell-Only Households

Over the last five or six years there has been increasing hand-wringing about the emerging phenomenon of households that have only cell phones and no landline telephone.  The problem is that those cell-only households fall outside the standard RDD sampling frame.  You sometimes see this referred to as "frame deterioration."  As the number of cell-only households increases, so does the coverage error in any RDD sample.  And, of course, the phenomenon is heavily concentrated among 18-24 year-olds, the age group that already is tough to get in telephone surveys.  Other demographic differences amount to lower SES.  When it was five percent of the population no one worried too much, but now it's north of 10 percent and growing fast.

A number of people on the academic and government side of the business have been tracking this phenomenon and trying to gauge the seriousness of its impact.  A series of studies at the National Center for Health Statistics (NCHS) has suggested that the cell-only group may be different on a number of health issues, including smoking and lack of health insurance, although those characteristics are also associated with younger and lower-SES populations.  One of the best studies to date is by Mike Brick, Sarah Dipko, Stanley Presser, Clyde Tucker, and Yangyang Yuan and was published in POQ (Vol 70: 780-793) as "Nonresponse Bias in a Dual Frame Sample of Cell and Landline Numbers."  The study evaluated a sample design that had both a standard landline and a cell phone sample, a somewhat cumbersome and expensive way to try to mitigate the impact of cell-only households.  Their conclusion: ". . . the coverage bias due to excluding cell-only households was not substantial in 2004."  This finding is consistent with other studies done in about the same time period.  In one particularly compelling study, Scott Keeter at Pew showed that any differences in political preferences due to excluding cell-only households can be eliminated by weighting by age (POQ, Vol 70: 88-98).
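The dual-frame design evaluated in the Brick et al. study is typically analyzed with a composite estimator: the landline-only and cell-only domains contribute their own sample means, and the overlap domain (people reachable through either frame) is a mix of the two frames' estimates.  Here is a sketch of that idea; the mixing weight and all of the smoking-rate and domain-share numbers are invented for illustration, not taken from the paper:

```python
def dual_frame_estimate(means, shares, lam=0.5):
    """Composite estimate across the three phone-status domains.

    means  : sample means; the overlap domain gets one estimate per frame
    shares : population share of each domain (should sum to 1)
    lam    : mixing weight given to the landline frame's overlap estimate
    """
    overlap = lam * means["both_landline"] + (1 - lam) * means["both_cell"]
    return (shares["landline_only"] * means["landline_only"]
            + shares["both"] * overlap
            + shares["cell_only"] * means["cell_only"])

# Hypothetical smoking-rate example: cell-only adults report higher rates
means = {"landline_only": 0.20, "both_landline": 0.22,
         "both_cell": 0.23, "cell_only": 0.30}
shares = {"landline_only": 0.40, "both": 0.47, "cell_only": 0.13}
estimate = dual_frame_estimate(means, shares)  # roughly 0.225
```

Dropping the cell-only term (and renormalizing) is exactly the coverage bias the paper is measuring, which is why the estimate matters more as the cell-only share grows.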

So the bottom line would seem to be that we don't need to worry yet, but as the number of cell-only households grows it likely will become more problematic.  One thing for sure: there are a lot of survey methodologists focused on the problem and the papers keep coming.  More on that story as it unfolds.


Online Growth Slowing?

Here is something I missed when I looked at the Inside Research issues on online research.  The ESOMAR pub, Research World, brought it to my attention. 

Growth in worldwide online research is slowing.  The growth rates for the last few years look like this:

  • 34 percent in 2004
  • 31 percent in 2005
  • 20 percent in 2006

They attribute this to the maturing of online in the US market but also hypothesize that "slower growth may also point to growing concern over the low quality of Internet samples."  Interesting.