
Posts from September 2007

So, How Do You Feel About ... ?

One of the more interesting sessions here at the ESOMAR Congress was one titled "Measuring Emotions." The Europeans have been into this for some time and I was curious to get a better sense of what it is all about. Much of it revolves around a field called cognitive neuroscience, which essentially is about how our minds work and the physical manifestations of mental activities of all kinds. For example, when we are nervous our hearts beat fast and our hands sweat. You can find a pretty good summary of the field and its potential implications for MR in a paper from last year's Congress. Most of the applications to date have been around advertising and branding.

A lot of the early thinking around neuroscience was a bit impractical because it often talked about measuring physical changes in the brain or skin. A somewhat more practical application involves a form of eye tracking in which physical indicators of emotional reaction, such as pupil dilation, eye movement, or rate of blinking, can be measured in a qualitative-type setting.

Measurement issues aside for the moment, it strikes me that there is a potential paradigm shift here.  Most survey research involves us asking people about how they feel, what their intentions are, and what they recall from previous events.  But the neuroscience argument is that only a small fraction of this information is available in the respondent's conscious mind, and even then they may have great difficulty expressing it because of verbal overshadowing in which our attempts to describe an emotion or event actually end up changing our perception of it.

But back to measurement.  One interesting line of research argues that the best way to capture respondent emotions is via metaphor.  In practical terms this seems to mean having respondents choose their responses from among a set of images that unambiguously represent a range of emotions.  The Web is, of course, the ideal way to do this.

Cool stuff.   Just how well these issues can be thought through and implemented remains to be seen, but there are researchers out there doing this kind of work for real companies and those companies are actively using the results in their businesses.  As noted above, much of this work has been around branding and advertising, but if it works there it can work elsewhere.


Engaging Respondents

I spent two and a half days this week at the ESOMAR Congress, a gathering of something like a thousand market researchers from over 70 countries. Use of cool gadgets in Web surveys was once again high on the agenda, this time with papers from Harris Interactive and Vision Critical. The gadgets were of the usual sort—slider bars, positioning objects (like brands) around on the screen, some virtual shopping. The session was titled "Respondent Engagement" and the thrust of the argument was that these devices help to maintain respondents' interest, an obvious need in the current research environment.

I had seen the Harris stuff before and the presenter and the conclusions remain the same:

  • These methods of answering take longer than conventional methods such as radio buttons.
  • Respondents' reactions to them are somewhat mixed, although mostly they find them a bit more difficult.
  • The measurements obtained have somewhat lower statistical reliability than the traditional methods like radio buttons.
  • You lose some respondents either because of technical problems or distaste for the design.

It also seemed to me that a session on engagement ought to have looked in more depth at two key measures of lack of engagement—termination rates and satisficing behaviors—but not much was said on that topic.

The Vision Critical paper offered some enthusiastic cheerleading for these techniques, although the data they presented were not particularly convincing, nor were they as expertly analyzed as in the Harris paper. They made great hay out of respondents declaring by a substantial margin that they would participate in more surveys if designed this way. However, the Fusion survey also had a much lower completion rate than the standard survey, and the presenters did not factor into their calculations those respondents who voted by closing their browsers.

The discussion at the end of each paper was probably the most interesting part of the session. After the Harris paper the room started out with considerable enthusiasm for these techniques, but as people began to grasp the technical problems and some of the potential bias they became less enthusiastic. The general sense seemed to be that these things should be used with caution and in moderation. The discussion after the Vision Critical paper got downright heated and the presenters were repeatedly challenged about their conclusion that these devices were an effective and harmless way to create and maintain engagement. (At one point the father of one of the presenters—and a co-author on the paper—jumped up, shook his finger at us, and scolded us all for not understanding the high science being presented!)

Sitting in this session and the one the previous week on pretty much the same topic at the earlier ASC Conference I found myself thinking three things:

  1. Researchers seem to like these devices more than respondents do.
  2. Because their appeal seems to vary among different types of respondents they have the potential to create bias, and the last thing we need in a Web survey of access panel members is more bias.
  3. The central problem with these things may be that they are non-standard interfaces, that is, we don't encounter them in other applications on the Web. Mostly when we give information on the Web we fill out forms and that's the interface people expect and are comfortable with.

I hate sounding like a Luddite, but I'm afraid that this is a wave I've not yet caught.


“The Lives of Others:” A Tale of the Long Tail?

I am here at the ESOMAR Congress in Berlin and today heard a keynote by Florian Henckel von Donnersmarck, the Oscar-winning director.

The obvious question: why a director as keynoter at an MR event?

The answer: without MR the movie might never have seen the light of day. The film had no mass appeal and virtually every distributor in Germany rejected it as "too dark and too intellectual." It just did not fit the tastes of the general theatre-going public, the center of the normal distribution. One distributor was creative enough to ask its MR staff to find the right audience and to test it with that audience. By putting together a test audience composed exclusively of college graduates rather than a representative sample of the total population, the researchers estimated that two million Germans would go to see the movie if properly positioned and advertised. That's the definition of a hit movie. When all was said and done, 2.3 million Germans went through the turnstiles.

While the term never came up, this struck me as a classic example of the long tail. The movie holds little appeal for the general population, for which market researchers generally advise their clients to create products. Still, there are enough people out in the tail of the distribution to make the movie a huge success if those people can be identified and marketed to. It's a case of MR supporting innovation, art, and culture. An unusual tale.


Fascinated with Gadgets

I've spent the last two days at the Association for Survey Computing's conference they've called Challenges of a Changing World. One continuing theme from the earlier workshop on the Interview of the Future is the use of more interactive features in Web surveys.

On the one hand, you might think that it's about time. Almost from the start of the Web survey movement its evangelists talked about the Web's ability to dramatically change how surveys are conducted by creating a more interactive and engaging experience for respondents. We finally are seeing things like sorting exercises that you accomplish with a mouse, slider bars and temperature gauges of all shapes and colors, increasingly elaborate virtual store shelves, and, the most obvious of all, video of an interviewer reading the questions to the respondent.

Then again, while these things really are very cool to see and their creators are completely enchanted with them there is not a whole lot of evidence that they help in any way. The well-designed experiments I've seen mostly conclude that these devices don't really improve data quality in any measurable way. Sometimes the results are a little different, but it's hard to tell whether those differences are positive or negative. Sometimes respondents give more positive feedback on survey design. And sometimes they actually seem to hurt us in any of a number of ways. For example,

  • Many require special browser plug-ins or enabled software such as Java that can cause them to load slowly or not at all, thereby forcing some respondents (usually a small percentage) out of the survey.
  • Some classes of respondents just don't like them and so you see differential breakoff rates that might create an age, gender or attitudinal bias. For example, older respondents may not like them much at all while young males think they are the coolest possible thing.
  • They tend to take longer to use than conventional interfaces like radio buttons and extending the length of the interview is never a good thing.
  • They add to the programming load making the surveys that use them more expensive and longer to get to field.

My personal (totally untested) hypothesis is that these sorts of things are still not in the mainstream of how people interact with the Web. Whether you're talking about buying books at Amazon, searching Wikipedia, or even posting to YouTube the Web is still mostly about filling in forms. Why should a survey be different? As long as these gadgets are not standard fare on the Web generally I don't think we can expect respondents to embrace them in our surveys. This may be a generation thing, as one of the enthusiasts suggested, or may come hand-in-hand with Web 2.0 as suggested by another. We will just have to wait and see.

In the meantime, I hope the testing continues. It's very interesting stuff. We have another test with slider bars in the field now and it's going to be interesting to see how it turns out.


Final Post on “Envisioning the Interview of the Future”

In addition to a lot of fascinating discussion today about avatars, social presence in the interview situation, deception in online and offline communications modes, voice recognition, and "video mediated communications" I collected some interesting observations.  Well, interesting to me.  In no particular order:

  • When it comes to all of these new and complex technological innovations the single most important question to ask is: does it help and if so how?
  • A standard theme in the current Web survey literature is that most gadgets that designers think are cool (slider bars, sorting with a mouse, etc.) are not well received by all respondents and there is no evidence that they help get us better data.  Are avatars any different?
  • Why would anyone think that surveying people in Second Life makes any sense? 
  • Our understanding of what makes a good interviewer is still very incomplete.  I now know this because even this group of experts is baffled about how to build one from the ground up.  When you have to make decisions about when to smile, how to structure a conversation, when to press the respondent and when to back off, when to move your head, and so on you see just how tough it is and how little we know.
  • No one is quite sure to whom respondents think they are actually responding.  The interviewer?  The computer (in the case of a Web survey)?  The survey designer?  The client?  Someone made the point that using the client's logo in a Web survey reinforces the point that the answers are being delivered to the client.
  • Much of the fascination with avatars is based in the widely held belief among methodologists that face-to-face interviews get the best data. But I'm not sure that we dug deep enough to understand why and then let that guide us to a new paradigm we can deploy online.

And finally, the best quote of the conference from Justine Cassell of Northwestern University: "Every design choice has an effect and if you don't know the effect don't use the design."