
Posts from September 2008

The wireless-only story still on hold?

Colleen Carlin pointed me toward an interesting post on the Pew Web site. It compares results from landline and cell phone samples from three polls on the presidential election. The polls were conducted in late June, late July/early August, and mid-September. The key findings:

  • In all but the first poll (conducted in late June), the landline samples show a tied race.
  • In all three polls, the cell phone samples show Obama leading by around 20 points.
  • When you combine the two samples, the impact is minimal--a two-point increase for Obama in June and a one-point increase in the other two polls.

These results suggest to me that the story we have been hearing for some time about the impact of wireless substitution is holding. That story has been that unless you are especially interested in the younger demographic, the impact of not including cell phones in samples is largely corrected with traditional demographic weighting. Given the substantial cost often associated with calling cell phone samples, we should continue to ask ourselves whether it is worth it for the sake of one- and two-point differences in estimates.
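For readers who want a concrete picture of what that correction looks like, here is a minimal sketch of post-stratification weighting by age group. The age categories, population shares, and sample counts are hypothetical, invented purely to illustrate the mechanics; they are not taken from the Pew polls or any real sample.

    # Minimal sketch of demographic (post-stratification) weighting.
    # All figures below are hypothetical, for illustration only.

    # Population distribution by age group (e.g., from Census figures)
    population_share = {"18-29": 0.21, "30-49": 0.37, "50-64": 0.25, "65+": 0.17}

    # Completed landline interviews by age group (hypothetical: younger
    # adults are underrepresented because many are wireless-only)
    sample_counts = {"18-29": 90, "30-49": 370, "50-64": 300, "65+": 240}

    n = sum(sample_counts.values())

    # Weight for each group = population share / sample share,
    # so underrepresented groups get weights above 1.0
    weights = {
        group: population_share[group] / (sample_counts[group] / n)
        for group in sample_counts
    }

    for group, w in weights.items():
        print(f"{group}: weight = {w:.2f}")

The point of the sketch is simply that as long as the respondents you do reach in each demographic cell look like the ones you miss, upweighting the underrepresented cells recovers most of what a cell phone sample would add.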

However, there is a caveat and it relates to subject matter.  We probably should not infer too much from political polling.  Other domains such as customer satisfaction or new product research may be very different.  As always, more research is needed.


Have we lost touch with reality?

No, this is not a comment on John McCain's surprising performance in the presidential polls. Rather, it's the title of a recent one-day conference put on by the Association for Survey Computing in London. I admire these guys for their creative approach to meetings in places like The Old Doctor Butler's Head, and they also manage to bring together good people to talk about current survey issues.

In this particular instance they brought together eight presentations around the theme of respondent engagement. For those of you who have not been paying attention, this is the new buzzword for attempts to improve online panel data quality by creating better online surveys. Generally, "better" in this context has come to mean using Flash or similar tools to liven up the presentation of the survey and give respondents cool gadgets like slider bars and sorting exercises where you actually move things around, in place of those boring old checkbox surveys that are the norm.

Now I admit that I have been skeptical from the beginning, because it has all been premised on the simple belief of researchers that Flash is better. There never were any clear and quantified signals from respondents calling for us to move in this direction. And the fact that the main evangelists all have Flash-enabled surveys at the heart of their business models may be part of why I have been suspicious. More importantly, we have tested slider bars in our work with the methodologists at UM and continually seem to find that you lose some respondents because of technical problems, that answering with these devices takes longer than with, say, radio buttons, and that the data you get from them are no different. Most proponents of Flash find much the same when they do systematic tests, but they often counter that with respondent satisfaction data that notes how cool and fun it is.

I invite you to review the papers and come to your own judgment. And then, at the risk of biasing you, I would add that in various conversations I have had around the industry, the feeling more and more is that Flash is eye candy and not really a solution to the respondent engagement problem. Our real problem is that our surveys are too long, too complicated, or just really boring, and Flash is another attempt, yes, to put lipstick on a pig.


When Voters Lie

This is the title of a recent article in the Wall Street Journal. It is a well-reported piece on a wide variety of topics, and it even includes research we have executed for Roger Tourangeau as part of our ongoing work with survey methodologists at ISR. But arguably the most interesting stuff is a reference to a telephone/online mode comparison study done by Harris Interactive that seems to show social desirability bias operates on a whole set of issues that many of us might not think of as "sensitive." For example:

  • 78 percent of phone respondents reported that they brushed their teeth twice a day, compared to 64 percent online.
  • 58 percent of phone respondents said they exercised regularly, compared to just 34 percent online.
  • 56 percent of phone respondents claimed that they went to church, synagogue, or mosque most weeks, compared to 25 percent online.

Of course, one can't rule out the possibility of some sample bias here. Maybe people who sign up to do surveys online are just a lot different from the rest of the population. But there is enough consistency with other work on social desirability in other modes (including paper-and-pencil self-administration) to suggest that the bias introduced by interviewer administration is more substantial than we might sometimes think.