
Posts from October 2010

The problem that won't go away

A few weeks back I saw this press release from Forrester, which is worth quoting at length:

"Social networking continued to grow over the past year . . . The number of people who joined social networks increased by 11 percent in Europe, 18 percent in metro China, and 11 percent in Australia. By comparison, North America saw slightly less growth, with only an 8 percent increase. On the other hand, between 2009 and 2010, no markets exhibited growth in the number of people who create social content."

So it seems that the 90-9-1 rule still rules.  MR's fascination with social media is all about content, yet content creation is the province of only a very tiny but noisy portion of the social media population, and those are the people we are listening to.

Taking a step back, this is not unlike the problem of online research in its previous incarnation: panels.  While panel companies typically don't share their recruitment success rates, most of us suspect that they run at less than 1 percent.  Response rates (if I may use the term in this context) have mostly cratered to the single digits, so we are actually interviewing a very tiny slice of the population, not by design through systematic sampling but by the happenstance of personal choice.  The latest trend toward river sampling enlarges the pool but probably does not have a material impact on the percentage of people who see the offer and actually take it up.  Taking a step back further still, to the salad days of "traditional research," we will remember that continually falling response rates were one of the oft-cited reasons for us to abandon it as quickly as possible.
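
To make the arithmetic concrete, here is a minimal sketch in Python. The rates are assumptions I have picked for illustration, not published figures, but they are in the neighborhood the paragraph above describes:

```python
# Illustrative arithmetic only: these rates are assumptions, not published figures.
population = 1_000_000        # adults we would like to represent
recruitment_rate = 0.01       # share who ever agree to join a panel (< 1 percent)
response_rate = 0.05          # share of panelists who complete a given survey

panelists = population * recruitment_rate      # 10,000
completes = panelists * response_rate          # 500

print(f"Share of the population actually interviewed: {completes / population:.2%}")
# -> Share of the population actually interviewed: 0.05%
```

Under those assumed rates, the people who actually answer amount to one two-thousandth of the population, and they are self-selected at both stages.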

Here I could launch into a rant about why low response rates with probability sampling are much better than even lower response rates with nonprobability samples, but that's not my point.  My point is that the vast majority of people don't want to talk to us, to tell us--complete and total strangers--what they think, regardless of our methods or setting.  In fact, they don't seem to want to tell anyone outside of conversations with real friends.  And despite all of the hype, it seems clear that the conversation has not really moved online.

So what are we left with?  Listening to the guy in the seat behind me on my last trip to the West Coast who did not shut up for more than three minutes in five hours?  Or the cab driver who missed the freeway exit because he was too focused on explaining to me why HOV lanes are a bunch of hype?  Are these the people who are going to help our clients reduce uncertainty in business decisions?

As my grandmother used to say, "Empty barrels make the most noise."  We need more people to talk to us and technology is not offering up a solution to that core problem.


KISS

One of the links in the blog roll down on the right is to Jakob Nielsen's site, useit.com. Nielsen does a steady stream of Web usability research and has published a number of excellent books on the topic. It's always interesting to look at his work and see what lessons there might be in it for online questionnaire design. He has a new post this morning describing the usability problems people encounter when a Web site's user interface does not behave the way they expect it to. He uses the term "mental models" to describe a user's beliefs about how a UI should perform. Those beliefs are formed from experience with the particular site and with other regularly visited sites. Designers, being very creative people with a wide variety of skills, may have other mental models based on what is possible or on their own sense of how a UI "should" behave. Things go awry when the user's mental model conflicts with the mental model the designer used when building the UI.

I find this easy to relate to because I still don't understand the mental model of the person or persons who designed the UI for Facebook. But I also have seen evidence of the problem in survey design. Take grids, for example. The standard grid has the attributes down the left side, the answer categories across the top, and the respondent is asked to select an answer for each row. There often is helpful light shading across alternating rows to keep the respondent on track. This works well because we Westerners read left to right, then down and left to right again. But sometimes, for reasons good and bad, we have tried a vertically oriented grid in which the respondent is asked to select one answer in each column. Even with helpful vertical shading, respondents struggle. They take longer to answer and in some cases give up altogether because the question does not fit their model of how a grid in a survey is supposed to work.

I have long believed that it's important that a Web survey behave like every other Web application that an Internet user encounters in his/her daily online interaction. When Amazon, Google, and even Facebook start using slider bars and drag-and-drop gadgets, then we should, too. The empirical evidence suggests that these things have some advantage based on their novelty, but they also slow people down and may even cause them to answer differently. Of course, it may be that the professional survey takers who have been the staple of online research have much different mental models than the rest of us. But as we try to expand on that base with broader recruitment across the Web through river sampling and similar techniques, we likely will find that simple, straightforward, and consistent survey designs serve us best. Before you argue, have a look at this previous post describing what respondents prefer when given a choice.


Let's ban the R word

While working on a paper over the weekend it suddenly hit me that "necessary but not sufficient" is the perfect way to describe the whole array of techniques that have emerged over the last few years in the name of improving panel data quality.  Not to be confused with the Goldratt "business novel" (now there's a concept!) of the same name, I mean it here in the legal sense of required but not enough.  Yet from most of what I read the goal of online seems to continue to be to produce representative samples, a goal we can never reach as long as Internet use is less than universal and we continue to generate samples from databases of volunteers who are themselves a tiny fraction of those online.  Nonetheless, we keep talking about cleansing and routing and weighting as the path to representative samples.  What's especially troubling about this is that deep down we know better.  But we can't seem to stop.  It's not that we can't use online to do very good work and to generate a lot of really smart insights.  It's just that creating representative samples is not on the list.
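
For what it's worth, here is a minimal sketch (Python, with made-up counts and targets) of the kind of post-stratification weighting the industry leans on. It forces the weighted sample to match known demographic targets, which is all it can do; nothing in it reaches the people who never opted in:

```python
# Minimal post-stratification sketch; the counts and targets below are made up.
sample_counts = {"18-34": 120, "35-54": 300, "55+": 580}        # who responded
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # known targets

n = sum(sample_counts.values())  # 1,000 completes
weights = {
    group: population_share[group] * n / count
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# 18-34: weight 2.50, 35-54: weight 1.17, 55+: weight 0.60
# The weighted sample now matches the age targets, but it still contains only
# the volunteers who chose to join and respond; the weights say nothing about
# everyone else.
```

Weighting like this is necessary housekeeping, but it is exactly the "not sufficient" part of the phrase.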

At the recent ESOMAR Congress Frederic John described online panels as "a Trojan horse that we have gleefully brought to our clients as the solution to all our problems that ultimately may damage our credibility beyond repair."  So please, let's stop ourselves before we make it worse.  Let's ban the R word.


“Facebook politicians are not your friend”

I am a huge fan of New York Times columnist Frank Rich. His column is the first place I go every Sunday morning, and sometimes late Saturday night, when you can get his Sunday column early online. On the other hand, I am whatever the opposite is of a fan of Malcolm Gladwell, whose books always seem to me to be about pop science used to explain the obvious. Their paths more or less crossed in the last week with pieces that go after the social media hype in interesting ways. Both take up the issue of Facebook and Twitter in politics and both dismiss their influence pretty convincingly.

The Rich piece gives me an excuse to post one of my all-time favorite Internet cartoons by pointing out that most of the political chatter driven by major politicians and political parties is as false and contrived as everything else they do.  Gladwell takes on some of the major myths about the political importance of Twitter (like the Iranian elections of last year), notes that connections in social media are inherently weak, and that any sort of meaningful personal action in politics or in life occurs not through listening to social media but through interaction with real friends.  Word of mouth is important, but not when the mouth is Twitter.

One of the great clichés of the new MR is that brands need to listen to what is being said about them in social media. No quarrel there. But the corollary that social media conversation drives purchasing behaviors in meaningful ways, well, I think the jury is still out on that one.


Still feeling our way

In my last post I promised some short updates on the just-concluded ESOMAR Congress in Athens. Then my family and I set off on a driving vacation around the Peloponnese, where thoughts of MR and the Congress quickly drifted away. I was reminded of this when the morning's email included a note from a colleague passing along the results of a survey that ESOMAR conducted prior to the conference using a predictive markets methodology. The survey offered 12 methods associated with "the new MR" and asked respondents (581 ESOMAR members) whether they would buy or sell shares in each method. The big winners: Netnography, Co-creation, MROCs, and Mobile. The also-rans generally attracted a third or less of the investment in these top four. At the Congress there were papers describing still more methods that were not included in the survey!

Seeing this brought back the unease I felt in Athens. As I noted in my previous post, the conference theme was built around the twofold challenge of the pressures on MR brought by a faltering global economy and the uncertainty about how the practice of research will evolve. For better or worse, we have had for generations some pretty standard ways of collecting the facts our clients need to make better business decisions, and we differentiated ourselves mostly by our analytic finesse and insight-generating abilities. Now all of that has changed. Those traditional methods are now in whatever stage comes after being "under fire," and the candidates to replace them are counted in the dozens. "Fractured" is a word that keeps coming into my head.

There is an old saying that everyone is entitled to their own opinion but not to their own facts. That doesn't seem to apply to MR like it used to. In a very real sense this is a very exciting time, more exciting even than the emergence of online 15 or so years ago. But I am the sort who likes clarity, and right now the future of MR is anything but clear. I can't imagine how our clients must feel.