
Posts from June 2008

Lost in Translation

The 3MC Conference has been very interesting, with lots of good presentations on things I know very little about. Mostly we've been hearing about large, ongoing, publicly funded global surveys that are executed by government agencies, the usual public-sector-focused private companies (e.g., Westat), and in some instances by global MR firms. The people executing these surveys have done a lot of good research on research and developed a lot of excellent processes to support what they do. The challenge is to figure out how to adapt this to ad hoc MR research on mercilessly short time frames.

Take the case of translation. Most of these folks are advocating a six-step process for producing a well-translated standard questionnaire that works well in multiple languages, in different societies, and in different cultures:

  1. Translation
  2. Expert review
  3. Adjudication
  4. Final adjudication
  5. Pretest, typically via cognitive interviewing
  6. Documentation

They have data showing that the lion's share of the problems is found in steps 2 and 3, but the hard stuff, the cultural stuff, generally shows up in step 5.
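
Out of curiosity, here is a minimal sketch, in Python, of how one might track a questionnaire through a process like this. The step names come straight from the list above; the class, the issue log, and the sample findings are all hypothetical, just to show how what gets caught at each step would feed the final documentation step.

    # Hypothetical sketch: tracking a questionnaire translation through the six
    # steps listed above. Only the step names come from the post; the rest is
    # illustrative.
    from dataclasses import dataclass, field

    STEPS = [
        "Translation",
        "Expert review",
        "Adjudication",
        "Final adjudication",
        "Pretest (cognitive interviewing)",
        "Documentation",
    ]

    @dataclass
    class TranslationJob:
        source_language: str
        target_language: str
        issues: dict = field(default_factory=lambda: {step: [] for step in STEPS})

        def log_issue(self, step: str, note: str) -> None:
            """Record a problem found at a given step; it all feeds documentation."""
            self.issues[step].append(note)

        def summary(self) -> dict:
            """Count of issues caught at each step."""
            return {step: len(notes) for step, notes in self.issues.items()}

    # In line with the post: most problems surface at expert review and
    # adjudication, while the cultural ones tend to show up at pretest.
    job = TranslationJob("en", "de")
    job.log_issue("Expert review", "Q12: literal translation loses the idiom")
    job.log_issue("Expert review", "Q15: response scale labels don't map cleanly")
    job.log_issue("Adjudication", "Q7: translator and reviewer disagree on wording")
    job.log_issue("Pretest (cognitive interviewing)", "Q3: concept has no local equivalent")
    print(job.summary())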

I have not asked the question: How long does this process take? The relevant question for us would more likely be: How quickly can we do this? I was chatting with someone from Harris Interactive about this and asked whether they had a process anywhere near this systematic and, if so, how they got it done on our short timeframes. The answer: "No and it's very scary." I can relate.


Size Still Matters

I sat through a session this morning here at 3MC with the interesting title, "Survey Agencies as Carriers of Multinational Research." On the face of it, this was pretty unusual in that five global MR firms were asked to essentially present their capabilities for doing global survey work. As it turns out, a very large portion of the global public policy and social research sponsored by various governments in Europe is being executed by commercial firms for the simple reason that they have the infrastructure and the technology to do it well. This obviously is a substantial business for them. TNS, for example, says that 25 percent of their total business is in public policy.

The presentations were all over the map with TNS being by far the most convincing about their ability to do global work and do it well. Most striking was just how much global leverage they all have. Four of them told us how many offices they have worldwide. GfK claims the broadest reach with 115 companies around the world. Then comes TNS at 80, IPSOS at 58, and Gallup at 40. Nielsen didn't give a number but their sheer size and the number of major companies in the Nielsen family is pretty overwhelming.

Equally interesting was the emphasis in the presentations by the three companies we most often compete against. TNS seemed to emphasize their global infrastructure for execution. GfK emphasized their academic roots and their "best science" approach. IPSOS was all about their ability to execute globally within five areas of specialization (e.g., advertising, opinion, CRM, etc.).

The Gallup and Nielsen presentations were not especially memorable, although anyone who can execute something like the Gallup World Poll in 152 different countries is pretty impressive.

 


It’s the Process, Stupid!

I am in Berlin at the International Conference on Survey Methods in Multinational, Multiregional, and Multicultural Contexts. Quite a mouthful and so it's known simply as 3MC. Yesterday I had lunch with some European colleagues involved in both ESOMAR and ISO. One of them posed the question of which is better: a survey from a well designed probability sample that is poorly executed or a survey from a non-probability sample that is well executed. This, as it turns out, is a theme that keeps popping up from session to session.

The first session was all about guidelines for conducting multi-cultural or multi-country research. It featured a set of guidelines developed via CSDI. I will have more to say about them in a week or so, once they are released and I have the link. For now the real point is that a very distinguished set of survey researchers, including the likes of Tom Smith from NORC and Lars Lyberg from Statistics Sweden, seemed to agree that a good survey is, in the words of Bill Blyth from TNS, 10 percent design and 90 percent process. To paraphrase Lars, there are so many opportunities to introduce error in questionnaire construction, translation, data collection, coding, data processing, analysis, and so on that the problems of sampling error begin to seem less important. At Statistics Sweden, for example, he discovered that errors in coding could cause estimates of labor force participation to be off by as much as 40 percent.
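
To make Lars's point concrete, here is a small back-of-the-envelope sketch with made-up numbers (not his actual Swedish figures) of how a modest rate of miscoding can push a labor force participation estimate around far more than sampling error ever would.

    # Back-of-the-envelope sketch with made-up numbers: a modest coding error can
    # bias a labor force participation estimate far more than sampling error does.
    import math

    def participation_rate(employed: int, unemployed: int, population: int) -> float:
        """Labor force participation = (employed + unemployed) / working-age population."""
        return (employed + unemployed) / population

    population = 10_000          # working-age adults in the sample
    employed = 6_000
    unemployed = 400
    true_rate = participation_rate(employed, unemployed, population)   # 0.64

    # Suppose coders misclassify 15% of the economically inactive (students,
    # retirees, homemakers) as unemployed job-seekers.
    inactive = population - employed - unemployed                      # 3,600
    miscoded = int(0.15 * inactive)                                    # 540
    observed_rate = participation_rate(employed, unemployed + miscoded, population)

    print(f"True rate:     {true_rate:.1%}")                           # 64.0%
    print(f"Observed rate: {observed_rate:.1%}")                       # 69.4%
    print(f"Bias from coding alone: {observed_rate - true_rate:+.1%}") # about +5 points

    # For comparison, the 95% margin of error from sampling alone at n = 10,000:
    moe = 1.96 * math.sqrt(true_rate * (1 - true_rate) / population)
    print(f"Sampling margin of error: +/- {moe:.1%}")                  # about +/- 0.9%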

This theme reappeared in a subsequent session that featured presentations about some of the major public policy surveys now going on around the world. In many of these countries it's hard to even think about having a high-quality sampling frame from which to draw a representative sample. So researchers do what they can but mostly focus on consistency of execution. Good advice, I think.


“May the Weights Be With You”

The title of this post is courtesy of Alex Gage, one of our former partners and now a successful political consultant. I was reminded of it when I saw a piece by Howard Fineman on the Newsweek site in which he takes a shot at parsing the current polls of the presidential race. OK, so he's not exactly Bob Groves in his grasp of the methodological issues, but he does shed some light on the variety of ways in which a handful of experienced and respected polling organizations practice the science of survey research. AAPOR is doing similar work in its attempt to unravel the disappointing performance of the polls in the 2008 New Hampshire primary.

One key issue that Fineman does not touch on is weighting. At an AAPOR conference several years ago, a number of well-known pollsters were invited in to talk about how they do what they do. I am a little fuzzy on the year, but the conference was in Norfolk, VA, and the topic may have been the 1996 election. Many AAPORites have yet to recover from that evening and its revelation of the scandalous weighting procedures used in political polling. It was after this session that the name Zogby became toxic on the AAPOR listserv, and by the end of the evening it was clear to all concerned that political polling is not science, but art.
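
For anyone who hasn't sat through one of those sessions, here is a minimal sketch, with invented numbers, of why the weighting decisions matter so much: the very same set of interviews produces noticeably different horse-race numbers depending only on which turnout assumptions the pollster weights to.

    # Hypothetical sketch: the same interviews, different answers, depending only
    # on what the pollster assumes the electorate will look like.

    # Unweighted sample composition by party ID, and support for Candidate A
    # within each group (all numbers invented for illustration).
    sample_share  = {"Dem": 0.40, "Rep": 0.30, "Ind": 0.30}
    support_for_a = {"Dem": 0.85, "Rep": 0.10, "Ind": 0.50}

    # Two defensible but different turnout models to weight to.
    turnout_last_election  = {"Dem": 0.39, "Rep": 0.33, "Ind": 0.28}
    turnout_assumed_parity = {"Dem": 0.35, "Rep": 0.35, "Ind": 0.30}

    def estimate(composition: dict) -> float:
        """Share for Candidate A given an assumed electorate composition."""
        return sum(composition[g] * support_for_a[g] for g in composition)

    print(f"Unweighted sample:         A at {estimate(sample_share):.1%}")            # ~52%
    print(f"Weighted to last election: A at {estimate(turnout_last_election):.1%}")   # ~50%
    print(f"Weighted to party parity:  A at {estimate(turnout_assumed_parity):.1%}")  # ~48%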

 


Make of These What You Will

The June issue of Inside Research reports that an online poll by YouGov matched the final results of the May 2 London mayoral election exactly. Not only was it right on, it was the only major poll to show Ken Livingstone, the eventual loser at 47 percent, behind. Everyone else had him winning. I will spare you the specifics of YouGov founder Peter Kellner's oozing comment on his willingness to help his competitors figure out where they went wrong.

The same issue had a short piece detailing just how long predictive markets have been around. For the benefit of the uninitiated, this is an approach trumpeted by James Surowiecki in The Wisdom of Crowds, arguing, among other things, that people are sometimes better at predicting the behavior of other large groups of people than they are at predicting their own. In other words, results from a survey that asks people to predict the likely success of a new product may be more accurate than a survey of those same people that asks about the likelihood they will purchase the product. In another variant, polls of casual experts who follow a certain topic can sometimes be uncannily accurate about future public behavior on that topic. (Our own Theo Downes-LeGuin has been experimenting in this area with surprising or disturbing results, depending on your point of view.) The specific item noted in the Inside Research piece is a recent article in The Wall Street Journal (April 28) by Gordon Crovitz arguing that analysis of betting behaviors on election outcomes is a better predictor than political polls, and he apparently has some data to back that up. (Unfortunately, the WSJ has yet to catch the wave, and so you can only access their articles if you pony up for a subscription.)