
Engaging Respondents

I spent two and a half days this week at the ESOMAR Congress, a gathering of something like a thousand market researchers from over 70 countries. The use of cool gadgets in Web surveys was once again high on the agenda, this time with papers from Harris Interactive and Vision Critical. The gadgets were of the usual sort—slider bars, positioning objects (like brands) around on the screen, some virtual shopping. The session was titled "Respondent Engagement," and the thrust of the argument was that these devices help to maintain respondents' interest, an obvious need in the current research environment.

I had seen the Harris stuff before, and both the presenter and the conclusions were unchanged:

  • These methods of answering take longer than conventional methods such as radio buttons.
  • Respondents' reactions to them are somewhat mixed, although mostly they find them a bit more difficult.
  • The measurements obtained have somewhat lower statistical reliability than those from conventional methods.
  • You lose some respondents, either to technical problems or to distaste for the design.

It also seemed to me that a session on engagement ought to have looked in more depth at two key measures of lack of engagement—termination rates and satisficing behaviors—but not much was said on that topic.

The Vision Critical paper offered some enthusiastic cheerleading for these techniques, although the data they presented were not particularly convincing, nor were they as expertly analyzed as in the Harris paper. They made great hay out of respondents declaring by a substantial margin that they would participate in more surveys if they were designed this way. However, the Fusion survey had a much lower completion rate than the standard survey, and the presenters did not factor the respondents who voted by closing their browsers into their calculations.

The discussion at the end of each paper was probably the most interesting part of the session. After the Harris paper the room started out with considerable enthusiasm for these techniques, but as people began to grasp the technical problems and some of the potential for bias, they became less enthusiastic. The general sense seemed to be that these things should be used with caution and in moderation. The discussion after the Vision Critical paper got downright heated, and the presenters were repeatedly challenged on their conclusion that these devices were an effective and harmless way to create and maintain engagement. (At one point the father of one of the presenters—and a co-author on the paper—jumped up, shook his finger at us, and scolded us all for not understanding the high science being presented!)

Sitting in this session, and in the one on pretty much the same topic at the ASC Conference the previous week, I found myself thinking three things:

  1. Researchers seem to like these devices more than respondents do.
  2. Because their appeal seems to vary among different types of respondents, they have the potential to create bias, and the last thing we need in a Web survey of access panel members is more bias.
  3. The central problem with these things may be that they are non-standard interfaces; that is, we don't encounter them in other applications on the Web. Mostly when we give information on the Web we fill out forms, and that's the interface people expect and are comfortable with.

I hate sounding like a Luddite, but I'm afraid that this is a wave I've not yet caught.
