
Posts from October 2011

Another data point on social media privacy

The buzz about privacy protections in social media research has fallen off some since August's Great Privacy Debate. That doesn't mean the issue is settled by any means, only that the professional and trade associations that have been driving the debate have taken in the feedback they've gotten and, to one degree or another, are back at the drawing board.

For me the key issue is not traditional survey research ethics, Terms of Use, or legislative privacy protections. It's really about the expectations of social media participants. The industry's future will be bleak if we rely on methods that, rightly or wrongly, appear to abuse our research subjects. In this connection, most of the survey data I've seen say that people have expectations about privacy in social media that are at odds with those of at least some MR practitioners.

So I found this little piece in The Market Research Bulletin interesting. In it, some folks from Maritz report on a survey of Twitter users suggesting that when people complain about a brand on Twitter, the vast majority hope the brand follows up with them. I guess it's a cry for help. Only about a third report getting such follow-up, and of that third, more than 80% are pleased that the brand responded. So in this instance there does not seem to be an expectation of privacy. They might not like it if their tweet were featured in a commercial, but they like the personal attention.

While I like the study, I feel compelled to cry "foul" on a methodological point. The methodology description includes an estimate of sampling error, which makes no sense with a nonprobability panel. This is an all-too-common misuse of the statistic in MR, but it's a bit more objectionable here: in the next paragraph Maritz boasts (justifiably) that they are ISO 20252 certified, and that standard clearly specifies that standard error should be calculated only for probability samples.
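For readers who haven't seen it written out, the statistic in question is the familiar 95% margin of error for a proportion. A minimal sketch (the sample size and proportion below are invented for illustration, not Maritz's actual figures) shows why quoting it is so tempting: the arithmetic always runs, whether or not the sample was drawn with known probabilities.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion -- valid ONLY for probability samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical figures: an opt-in panel of n = 1,298 Twitter users, with
# p = 0.33 reporting brand follow-up. The formula happily returns roughly
# +/- 2.6%, but with a nonprobability panel that number has no statistical
# meaning, because the formula assumes random selection from the population.
moe = margin_of_error(0.33, 1298)
```

The formula's validity rests entirely on random selection from a known frame; with an opt-in panel, the number it produces is just decoration.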

Nonetheless, it's still an interesting little study.

Mail survey renaissance?

In today's technology-driven MR industry it feels beyond retro to even mention mail surveys. Yet in the broader research industry, and specifically the sector that deals with social policy research, mail surveys are experiencing a major resurgence. The problems of coverage and nonresponse in telephone surveys that have driven MR online are driving social policy researchers to seriously evaluate mail, either as a prime survey mode or as recruitment to Web or telephone.

The attraction is the Delivery Sequence File, a list of all of the addresses serviced by the US Postal Service. As you might imagine, direct mailers have used this file for years and have augmented it with all sorts of additional data to make targeted mailings possible. Government surveys, especially those using in-person interviewing, have gradually been transitioning to it as a cost-effective way to do area probability sampling.

I blogged about this 18 months ago under its common name--address-based sampling. I'm bringing it up again because the current issue of POQ has three interesting articles on the topic. I especially recommend Vincent Iannacchione's synthesis of the current research. Obviously, I don't expect a major shift to ABS within MR. It has serious limitations in terms of cycle time and questionnaire complexity that make it a poor candidate for most of the work we do. But in that part of our work where a representative sample is essential, it increasingly looks like the best option.

What’s in a word?

There has been a bit of online buzz lately around what we should call ourselves in the new era now dawning. Should it be Marketing Research or Market Research? It made me wonder in a fanciful sort of way whether we should be thinking about the second word as well. Does Research still fit?

So I grabbed the definition of research from several online dictionaries (Oxford, Merriam-Webster, Cambridge, and Google) plus Wikipedia, did a little editing to standardize some word forms, and created this word cloud using Wordle.

[Word cloud: "research"]

I repeated the exercise with another word we hear tossed around to describe our future: consultant.

[Word cloud: "consultant"]

I leave you to draw your own conclusions.
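For the curious, the mechanics behind a word cloud are nothing more than word-frequency counting, which a tool like Wordle then maps to font size. Here is a minimal stdlib sketch; the snippet of text and the stopword list are illustrative stand-ins, not the actual pooled definitions.

```python
import re
from collections import Counter

# Illustrative stopword list -- real word-cloud tools use much longer ones.
STOPWORDS = frozenset({"a", "an", "the", "of", "or", "to", "and", "in", "into"})

def word_frequencies(text):
    """Count word frequencies -- the raw input a word cloud sizes words by."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# A made-up snippet standing in for the pooled dictionary definitions:
definitions = (
    "the systematic investigation into and study of materials and sources "
    "in order to establish facts and reach new conclusions"
)
freqs = word_frequencies(definitions)
```

Feed the resulting counts to any rendering layer and you have a word cloud; the editing step mentioned above (standardizing word forms) would simply normalize words like "researching" and "researches" to "research" before counting.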

Back to the Future

I have been in a sort of accidental exile from social media since just before I left for Amsterdam. It was a busy week, with the Congress, ISO Technical Committee meetings, and WAPOR all crammed together and no time left for blogging or even monitoring Twitter. Then came an all-too-brief family vacation enjoying some lovely Belgian cities (Bruges, Antwerp, Ghent) with the help of the best beers in the world, followed by the standard punishment our day jobs visit upon us when we are out of the office "too long."

With all of that behind me, I have begun to catch up with what's been going on in MR's social media echo chamber, and, well, it's like they say about American TV soap operas: it's no big deal if you miss a bunch of episodes, because when you finally tune in again the story line hasn't advanced all that much.

One thing that has struck me is the plethora of reports, surveys, presentations, and general prognostication about the future of our industry. Maybe it's because it's conference season. At the highest level they all say pretty much the same thing and disagree only about how fast all of this will happen. But when you look closely at most of the surveys in particular and evaluate the methodology from a survey research perspective, you can't help but be amazed that these are designed, executed, and reported on by research professionals. Convenience samples and overhyped findings are the norm. There is storytelling for sure, but it sometimes borders on fiction. As I said, at some very high level they all come to more or less the same conclusions, but if I want to use any one of them to decide where to place my bets and how I need to evolve my organization to stay competitive in a changed industry, I'm not going to find the confidence I need in any one of these studies.

In that sense this is a sort of parable for our industry and its future. We've come to recognize that all of our sources and all of our methods have important weaknesses, and our storytelling needs to take that into account. The smartest commentators among us see that the real future lies in our ability to consider a broad range of relevant data, understand their strengths and weaknesses as related to the business problem at hand, and synthesize disparate findings into a compelling story that helps our clients make very specific business decisions. I'd love to see someone do that with all of the research being presented about the future of our industry. I wonder if the story would be any different.