
More thumbs down on eye candy in surveys

The current issue of Inside Research has a short note on some work done at DMS looking at respondent reactions to rich text and graphical interfaces in Web surveys. It's hard to go to an MR conference these days without seeing someone make a passionate argument for Flash or various kinds of interactive gadgets as a way to increase respondent engagement and therefore improve data quality. Readers of this blog know I am skeptical. Nonetheless, I tracked down the presentation from Chuck Miller at DMS.

It turns out to be an interesting piece of research. (There is a video of Chuck describing his research here.) He took sample from a variety of sources and allocated it across three treatments: (1) forced into a plain text survey; (2) forced into a rich text survey; and (3) allowed to choose between plain text and rich text. Quotas were used to keep the demographic distributions consistent across treatments. The key findings:

  • Three out of four respondents in the choice treatment chose plain text.
  • Choice seemed to be driven by lifestyle (e.g., lots of time online, electronic gadget geek) rather than demographics. Rs choosing rich text also reported taking lots of surveys.
  • Rs in the rich text group gave higher ratings of the survey experience than those in the plain text group, but mid-survey terminations were higher in the forced rich text group than in the forced plain text group.
  • For the most part, there were no differences in the response data itself, except for the slider bar questions: Rs using the slider bars reported significantly more time watching TV, listening to radio, and reading magazines than those using standard radio buttons.
  • There were no significant differences across a range of data quality indicators except for straightlining, which was higher in the plain text treatment. However, I have always felt that the standard radio button grid lends itself to straightlining while many of the rich media answering devices do not, but that doesn't mean the latter are yielding more thoughtful answers.
  • Overall, the best approach seems to be to let people choose their interface, but the practicality of that with today's software tools is questionable.

I am reminded of research, now several years old, that looked at slider bars and sorting gadgets back when we first figured out how to incorporate them into surveys. That research mostly reported that these devices cost us more respondents (whether from technical issues, confusion, or dislike of the device), generally took longer to complete, and produced the same results as conventional answering devices.

This whole debate, if there is one, is symptomatic of a basic problem in the evolution of online research. Someone gets an idea, sometimes builds a business around it, and advocates it far and wide as the next big thing with no real data to substantiate the claims. Because it's cool or fast or less expensive, we buy into it, only to eventually find an emperor with no clothes.

We really need to do better.
