Posts from July 2010

Tough review from Facebook users

One hears a lot these days about social networks as potential sample sources. One version of the argument says we need them because they can deliver a demographic that is hard to come by in surveys, even with other online methods. Another says we can increase respondent engagement either by replicating the social network setting or actually interviewing people in situ.

My morning email included a link courtesy of Computerworld to a piece with the eye-catching headline, "Facebook user satisfaction 'abysmal.'" The story is based on a report from the American Customer Satisfaction Index, a group that has been measuring customer satisfaction with an ever-increasing number of companies and government agencies for the last 15 years. (Here I disclose that my company does a hefty share of the ACSI work, although not this particular study.) Facebook and MySpace both had scores in the low 60s while YouTube and Wikipedia were around 15 points better.

To quote from the CW article, "When asked what they didn't like about Facebook, users reported privacy concerns, advertising, interface changes, navigation problems and constant notifications about 'annoying' applications." I leave it to you to decide how survey requests might play in that context.


It works!

I was hoping this story describing the use of Twitter to track public sentiment on important public policy issues and predict election results would die a quiet death. It seemed to have done just that until Robert Bain dug it back up in this month's issue of Research. A close reading of the study shows that sometimes it works and sometimes it doesn't, and the authors can't seem to make up their minds whether Twitter is reflecting the sentiment in the larger population or driving it. More importantly, they never try to answer the key question: why should this work at all?

The Research article goes on to describe a similar study by Tweetminister that apparently predicted the share of the vote in the recent UK election "on a par with Ipsos, Populus, and Harris." In this case, Bain has the good sense to ask why this should work, and the Tweetminister CEO explains, "What happens on Twitter doesn't stay on Twitter. It influences what happens on mainstream media and in other places." Really? So in this case it seems that Twitter is driving public opinion rather than reflecting it.

Within moments of reading all of this I saw a link on Twitter to a fascinating study by the folks at Pew who have looked carefully at the relationship between social media and mainstream media. Two findings are relevant here:

  • Politics accounted for just 6 percent of posts to Twitter over the 12-month study period. Twitter posts are mostly about technology (43 percent), often Twitter itself.
  • Stories don't jump from social media to the mainstream media; the flow runs the other way. Only one story, "climategate," created a stir in the mainstream media after it caught on in social media.

The arguments here are eerily familiar to those of us who were around in the early days of online panels. No theory was ever put forth as to why this approach to doing research would work, only the empirical argument, "It works." And the favored test case: predicting election results.

Some may ask, "Why do we need a theoretical basis for what we do? As long as it works, that's all that matters." The problem, of course, is that without theory you never know when it will work and when it won't. Surely we learned at least that much from online panels.


Hoisted on their own petard

The current issue of Inside Research is out and it includes the mid-year update on online spending. Despite the headline that characterizes the growth as "soft," the numbers show the MR companies reporting to IR (and mine is one) estimating that their 2010 revenues from online will be around $2.2 billion, a 12 percent increase over 2009. By comparison, the increases for 2009 and 2008 were a mere 3 percent and 2 percent, respectively. Of course, a substantial amount of the 2010 growth is no doubt due to an improvement in MR bookings generally, so I guess you could say that 12 percent growth is soft when compared to 18 percent for 2007 and 21 percent for 2006.

In addition to the numbers the article has a series of anonymous quotes from reporting companies and one in particular caught my eye:

 "More and more clients shifted their data collection spending toward lower cost alternatives in '10. However, given the lower response rates that online data collection provides relative to other methodologies, we are finding clients reconsidering their decisions to make this move."

Huh? How can it be that there is anyone left in MR who believes that response rates from online panels have anything to do with quality or representivity? When you're working with a probability sample competently drawn from a high-quality frame, lower response rates immediately cause you to worry that representivity has been compromised. But with a panel, where representivity was sacrificed long ago, response rate is meaningless as a measure of quality. In fact, most sampling statisticians argue that you should not even bother to calculate it. If you do choose to calculate it, then you need to do it at both the recruitment stage and the individual survey stage. When you do that, the numbers get really scary.

Let's assume a US panel of two million adults. If we say that the US adult population is 200 million (it's actually a bit more), then by definition the response rate at the recruiting stage for the panel was around 1 percent. Now even if I get a whopping 50 percent response rate to my survey, the cumulative response rate is only about 0.5 percent (1 percent × 50 percent). Yes, one-half of one percent.
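For anyone who wants to check the arithmetic, here is a minimal sketch in Python using the round figures assumed above. The population, panel size, and response rates are illustrative placeholders, not real panel data.

    # Cumulative response rate for a hypothetical online panel,
    # using the round numbers assumed above.

    us_adult_population = 200_000_000   # rough US adult population
    panel_size = 2_000_000              # assumed panel of two million adults
    survey_response_rate = 0.50         # generous 50% response to a single survey

    # Stage 1: recruitment -- the share of the population that ever joined the panel.
    recruitment_rate = panel_size / us_adult_population        # 0.01, i.e. 1%

    # Stage 2: the survey itself, compounded with the recruitment stage.
    cumulative_rate = recruitment_rate * survey_response_rate  # 0.005, i.e. 0.5%

    print(f"Recruitment-stage rate: {recruitment_rate:.1%}")
    print(f"Cumulative response rate: {cumulative_rate:.2%}")

The point of multiplying the two stages is that a headline survey response rate, however high, sits on top of a recruitment rate that is already tiny.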

There is an irony here. One of the arguments for moving to online back in the 1990s was that response rates in traditional methodologies had fallen so low (10 percent or less) that their results were no longer reliable. I'm sure you can finish the thought.


Social media research absent the hype

I just sat through an ESOMAR webinar on user-generated content by Niels Schillewaert and Annelies Verhaeghe from Insites.  Nicely done.  It was such a relief to be spared the usual hype about the social media revolution and attendant trashing of traditional research.  They did a great job of showing how what they like to call "neo-observational research" can generate insights that you just don't get with traditional methods.  But they also made the key point that those insights can be used to improve traditional research rather than replace it.  A sort of double whammy for clients!

They're going to present a much-expanded version of this as part of a workshop at the ESOMAR Congress in Athens in September.  Definitely worth considering.


Summer fun

Ray Poynter has initiated a fun little discussion in the NewMR group on LinkedIn. The topic, "Cautionary tales for market researchers," invites examples of failures of MR to help new researchers "develop a fully rounded awareness" of what we do.  The classic (no pun intended) example for most of us is the New Coke fiasco.  I've now learned about iSnack 2.0.  Really! Somebody did it.  How can I get hold of one?

It will be interesting to see what else makes the list.