Posts from August 2010

Mastery, Mystery, and Misery

Well over a decade ago, when we first started doing Web surveys, I had an on-again, off-again argument with a colleague who used to say, "A Web survey should look like a Web site." My counter was that the motivations of site visitors are different from those of survey respondents. Site visitors come of their own accord, looking for information or to buy a book, a CD, or a camera, and they are willing to struggle with design elements to find what they are looking for. The effort they are willing to expend is proportional to their desire to find it, tempered by the difficulty of finding it offline or somewhere else. Survey respondents, by contrast, are only there because we've invited them, and their motivation is weak. They are information providers rather than information seekers; the effort they are willing to expend is considerably more limited; and if the task gets too hard or confusing, many of them will simply bail. None of which is to say that survey designers cannot learn a great deal from Web designers and Web usability experts.

I was reminded of this when I stumbled across an old post by the Web usability expert Jakob Nielsen titled "Mastery, Mystery, and Misery: The Ideologies of Web Design." His simple summary says it all: "Simple, unobtrusive designs that support users are successful because they abide by the Web's nature – they make people feel good." The ideologies:

  • Mastery is all about empowering users and requires a simplicity of design in which they intuitively know what every element means. "Understanding what they're being shown and knowing what they must do to achieve a desired effect—that's the stuff of mastery."
  • Mystery obfuscates choices. It sets in when designs drift away from simplicity and use novel or exciting design elements in the belief that simple is boring. People don't want puzzles to solve; they want to understand quickly exactly what they need to do to get the result they want.
  • Misery sets in when designs oppress users and either restrict their choices or provide so many choices that users are puzzled about how to act to achieve their objective.

Nielsen obviously sees Mastery as key and concludes by reminding us that users are goal-driven, have little patience for figuring out what to do next, and view the Web more as a tool than an environment.

As researchers we understand that the first requirement is to write questions that are clear, easily understood, and unambiguous. But the Web also requires that we present them in a way that makes it as easy as possible for a respondent to answer. Simplicity, not gadgets and puzzles, is the shortest path to that goal. For us, too, the Web is just a tool.


Are we there yet?

The August issue of Inside Research has one of its periodic reports from the Buyer's Roundtable. The August topic: Will Listening Trump Asking? The answer seems to lie somewhere between 'Maybe Someday' and 'Probably Not.' Since this is a qual group of N=25, I'll go right to some verbatims:

"The nature of the data does not easily lend itself to in-depth insightful analysis . . ."

". . . it needs to be integrated with more traditional MR approaches and metrics."

"Fairly crude analyses and sentiment ratings are better, but still somewhat judgmental."

". . .the closer you get to it, the more you see the limits of social media as a decision-making tool."

"Our tracking continues to show that those voices are anything but typical."

"The only reason that some of the data is worthwhile is the low cost."

"Social media though has been critical in a crisis like the pet food contamination story. . ."

"Some of the monitoring tools are scary powerful, but still primitive from a marketer's perspective. . ."

"I do think this is a case of ignore at your own peril, but I don't think the industry has the right solution yet."

Of course, if my dogs could talk they might see it differently.


Insight

Tom Ewing's Blackbeard blog has a neat little post on the meaning of insight, one of those words that everybody likes to talk about as the main goal of research but that no one is quite sure how to define. Tom puts it into terms we can all understand. He writes:

You know it when you see it.  It’s the thing that makes you feel good about a piece of research work; it’s the “yes!” moment in a report; it’s the bit that has you jumping out of your chair and pacing up and down with the implications.

May we all have such moments and have them often.


So much for robopolls

For about the last week or so I have been getting regular calls on my home answering machine from Governor Mike Huckabee, who I gather is once again running for President. While it seems to be the Governor's voice, it is in fact a recording inviting me to take a survey by IVR. Somewhere back in my blog archive there are a couple of posts about what the politicos like to call "robo polls," that is, RDD samples autodialed with a survey in which the questions are recorded and played back while respondents answer using the telephone keypad. I wrote those posts because a colleague had asked me about the validity of the methodology, specifically with regard to representativeness. I did my best to discredit it.

My posts hardly landed a blow compared to the job that Nate Silver of fivethirtyeight.com has done in a recent post exposing the methodology of the best known of the robo pollsters, Scott Rasmussen. There are the usual problems with Rasmussen's methodology--limited calling windows (5:00PM-9:00PM weeknights), no call backs, very low response rates, taking the first person who answers the phone, etc.--but Nate has taken things a step further by demonstrating that at best these calls can reach only about 50 percent of the population. In fact, it's probably much lower. Putting together the most recent data on cell-only households with time-use data from the American Time Use Survey, Nate shows that the likelihood that people are home during the calling window and, if they are home, will answer the telephone is somewhere around 50/50.
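To see why those factors compound the way they do, here is a rough back-of-the-envelope sketch. The rates below are illustrative assumptions of my own, not figures from Nate's analysis; the point is simply that reach is the product of landline coverage, the chance of being home during the calling window, and the chance of answering an unrecognized call.

    # Back-of-the-envelope reach estimate for an autodialed landline poll.
    # All rates below are illustrative assumptions, not figures from Silver's post.
    p_landline = 0.75  # households still reachable on a landline (assumed)
    p_at_home = 0.65   # someone at home during the 5-9 PM window (assumed)
    p_answers = 0.70   # answers an unrecognized call if at home (assumed)

    reach = p_landline * p_at_home * p_answers
    print("Share of the population such a poll can even reach: "
          "{:.0%}".format(reach))  # roughly 34% under these assumptions

Even before any nonresponse among the people actually contacted, multiplying these pieces together pulls the reachable share well below half the population, which is exactly the point of Nate's argument.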

I would like to think that this is the last nail in the coffin of robo polls, but I know better. It's just one more example of how little the consumers of survey research, whether in the media, the general public, or even our MR clients, understand about the underlying scientific principles of what we do. For this we probably only have ourselves to blame.