
Posts from June 2007

Mail to Web?

We are crafting a response to an RFP that contemplates transitioning a mail study to the Web.  I've been asked the question, "What kinds of problems might we encounter?"

Tough question.  I am hard pressed to think of studies I've seen that take a good, systematic view of the issue.  So mostly, I am guessing.  For what it's worth:

  • Expect the response rate to drop.  At least 30 percent of US households do not have Internet access, but I expect all of them have access to a pen or a pencil.  Since this is customer sat work for an insurance company, the proportion without Internet access is probably smaller than 30 percent, given the SES and age bias associated with Internet access.  More serious is the natural affinity that people seem to have for paper and pencil as opposed to the Web.  I have seen a couple of studies that let the respondent choose the mode, and mostly they choose paper.  The Web is just too much trouble for a lot of people.  For this reason it may make sense to offer a mail questionnaire as an option at the time of contact.
  • Expect a different demographic bias.  I would expect significant demographic differences between those who have previously responded by mail and those who will respond by Web.  In a general population study this might not be a major problem because one could correct with demographic weighting (see the sketch after this list).  In this case, however, the demographics of the sample frame (the client's list of customers) may not be known, so there may be no clear target to which the survey results can be weighted.
  • Don't expect mode effects of the magnitude we sometimes see in phone-to-Web conversions.  Since both are self-administered in a visual mode, the usual sources of mode effects are not at issue.
  • Expect to redesign the questionnaire.  There is a tendency to make the Web questionnaire an online version of the paper document, all the way down to look and feel.  The rationale is data comparability across the two modes.  But a redesigned Web questionnaire can dramatically improve data quality by including edits, validations, and automated skips that virtually eliminate the errors respondents make with paper questionnaires.
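
To make the weighting point above concrete, here is a minimal post-stratification sketch in Python.  The age groups and shares are hypothetical, invented purely for illustration; as noted, the real targets would have to come from the client's customer list, which may not be available.

```python
# Minimal post-stratification weighting sketch.  The cells and the target
# shares below are hypothetical -- in practice the targets would come from
# the sample frame (the client's customer list), which may not be known.
target_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}    # assumed frame
observed_share = {"18-34": 0.15, "35-54": 0.40, "55+": 0.45}  # assumed Web returns

# Each respondent in a cell gets weight = target share / observed share.
weights = {cell: target_share[cell] / observed_share[cell] for cell in target_share}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.666...}
```

Younger respondents, underrepresented in the (assumed) Web returns, get weighted up; older ones get weighted down.  Without a known frame, there is nothing to anchor those targets to.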

Of course, your results may vary, so parallel testing is essential.  If you are lucky, the differences may be insignificant and can be ignored.  More likely you will see some of what I have described above, but you may also be able to correct for it over a couple of rounds of experimentation.


Is that somewhat satisfied or very satisfied?

Andrew Cober has been working with an energy client for whom we do a phone study.  While monitoring, the client heard respondents struggling a bit with questions like this:

Would you describe the price you currently pay for electricity as very reasonable, somewhat reasonable, neither reasonable nor unreasonable, somewhat unreasonable or very unreasonable?

So Andrew and the client decided to run an experiment using an alternate format (known technically as "branching").  Half the respondents were randomly assigned to the question format above, and the other half got a format that goes like this:

A.  Would you describe the price you currently pay for electricity as reasonable, neither reasonable nor unreasonable, or unreasonable?

B.  Is that very reasonable or somewhat reasonable?
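
For what it's worth, the split-sample design itself is simple to implement.  Here is a minimal Python sketch of randomly assigning respondents to the two treatments; the function name and the respondent IDs are hypothetical.

```python
import random

def assign_treatments(respondent_ids, seed=2007):
    """Randomly split a sample in half between the two question formats."""
    ids = list(respondent_ids)
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"single": ids[:half], "branching": ids[half:]}

groups = assign_treatments(range(1000))
print(len(groups["single"]), len(groups["branching"]))  # 500 500
```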

This format is popular in political polling and in fact was invented by one of the pioneers of the field, V.O. Key.  Probably the landmark study on the topic was done by Krosnick and Berent, who showed that data collected with this branching technique had greater statistical reliability than the one-question format.  By strange coincidence, I heard a paper at the last AAPOR Conference by Doug Rivers (a colleague of Krosnick's at Stanford) arguing that the Krosnick and Berent study was flawed because it ignored missing data, something the branching format produces more of.  Rivers showed that when you bring missing data into the equation, the one-question format outperforms the branching format.

All of that may have a certain academic interest, but it doesn't help Andrew with his problem, which is to figure out what to recommend to his client.  He might start by citing the literature, which seems to say that while respondents might struggle more with the single-question format, in the end it produces better data.  In his test Andrew was seeing roughly equivalent levels of nonresponse (DK or Refused) across the two treatments.  There were, however, some differences in the distributions, as shown in the graph.  For example, the branched format produced a few more responses at the extremes.  It's difficult to know whether that is a good or a bad thing.  But what would seem to me clearly to be a bad thing is the slightly elevated "no opinion" group.  This is the combination of respondents who say "Neither reasonable nor unreasonable" and those who refuse to answer or say they don't know.  The single-question format produces 11 percent in these combined categories, while the branched version produces 16 percent.  Andrew tested this with two different questions and got roughly similar results for both.
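
A quick way to check whether that 11-versus-16-percent gap is more than noise is a two-proportion z-test.  The sketch below assumes 500 completes per treatment, a number I am inventing for illustration since the actual sample sizes aren't given here.

```python
from math import sqrt

# Hypothetical counts: 11% vs. 16% "no opinion", assuming n = 500 per cell.
n1, x1 = 500, 55   # single-question format
n2, x2 = 500, 80   # branching format

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # SE of the difference
z = (p2 - p1) / se
print(f"z = {z:.2f}")  # z = 2.31; |z| > 1.96 is significant at the 5% level
```

At those assumed sample sizes the gap would be statistically significant; with smaller cells it might not be, which is exactly why the replication across two questions matters.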

So my advice to Andrew is to stick with the one-question format because:

  1. The most recent research on political attitudes finds that the one-question format has greater statistical reliability.
  2. Your tests indicate that we get more noncommittal responses with the branching format, even though it produces more responses at the extremes.
  3. The branching format takes longer and therefore costs more in interviewing time and respondent goodwill.

Someone else might look at these data and come up with a different recommendation, and another experiment in another domain might yield different results.  After all, the stock answer to every methodological question is, "It depends."  That's what makes this survey methods stuff so intriguing.