
Are Those Long Surveys Worth It?

I've just read two papers with very similar designs and nearly identical conclusions, one by Arthur Lugtigheid and Sandra Rathod for Survey Sampling and one by Mirta Galesic at the University of Maryland. Both papers looked at three data quality indicators: time spent answering questions, the percentage of missing data or "don't know" (DK) responses, and the number of characters keyed into open ends. Lugtigheid and Rathod also looked at use of a special slider bar, and Galesic looked at differentiation in grids, that is, the degree of "straightlining."
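Neither paper publishes code, but the indicators they track are straightforward to operationalize. Here is a minimal sketch, with an invented record format and made-up answer codes, of how the nonresponse, open-end length, and differentiation measures might be computed; the time indicator would simply be the gap between page timestamps.

```python
# Illustrative sketch only (not from either paper): three data quality
# indicators computed from a hypothetical respondent record.
from statistics import pstdev

DK_CODES = {"DK", "REF", None, ""}  # assumed non-substantive answer codes

def pct_nonsubstantive(answers):
    """Share of answers that are missing, DK, or refused."""
    return sum(a in DK_CODES for a in answers) / len(answers)

def open_end_length(texts):
    """Average number of characters keyed into open-ended items."""
    return sum(len(t or "") for t in texts) / len(texts)

def grid_differentiation(ratings):
    """Spread of ratings within one grid; 0 means pure straightlining."""
    return pstdev(ratings)

# Hypothetical respondent late in a long questionnaire
answers = ["Agree", "DK", None, "Agree", "DK"]
opens = ["", "n/a"]
grid = [3, 3, 3, 3, 3]

print(pct_nonsubstantive(answers))   # 0.6 -- high item nonresponse
print(open_end_length(opens))        # 1.5 -- very short verbatims
print(grid_differentiation(grid))    # 0.0 -- complete straightlining
```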

Their results show significant changes in response behavior as respondents get further into the questionnaire:

  • They spend less time reading and answering questions.
  • They choose DK or other non-substantive responses more often.
  • They key fewer characters in open ends.
  • They are less likely to use gadgets like slider bars.
  • Their responses to questions in grids show less differentiation.

How long does it take before these behaviors show up?  Lugtigheid and Rathod say after about 20 minutes.

As an industry, we have tended to believe that if we can keep respondents online, that is, achieve a relatively low termination rate, then the Web questionnaire is doing its job.  In the incentive-driven mode the Web has become, respondents may finish, but at what price?
