
From the frying pan to the fire

The latest issue of the International Journal of Market Research has an article by Mike Cooke and colleagues at GfK describing their attempt to migrate Great Britain's Financial Research Survey, a tracker with a 20-year history, from face-to-face to online. Despite hard work by some of the smartest people I know in this business, and after spending around £500,000, they concluded that full migration to online was not possible unless they were prepared to risk the credibility of the data itself. There simply were too many differences online that would disrupt their 20-year trend line. In the end they settled on a mixed-mode approach that has an online component but keeps the majority of interviews face-to-face.

Seeing the piece in print (I had heard the research presented at a conference a year or so ago) reminded me that much of the research on converting tracking studies from offline to online doesn't have a happy ending. Earlier this year, at an MRIA conference in Toronto, I heard a paper by Ann Crassweller of the Newspaper Audience Databank in Canada describing her test of the feasibility of converting a tracking study on newspaper readership from telephone to online. She compared results from her telephone survey with those from four different online panels. None of the four panels produced an online sample that matched her telephone sample on key behavioral items, and the variation among the four panels was substantial. She concluded that, at least for now, she needed to stay with telephone.

Fredrik Nauckhoff and his colleagues from Cint had a better story to tell at the 2007 ESOMAR Panel Research Conference. They compared telephone and online results for a Swedish tracker focused on automobile brand and ad awareness. Results were mostly comparable, and where they were not, the authors felt the differences were manageable. They did, however, sound a note of caution about the applicability of their results to countries with lower internet penetration than Sweden (81 percent).

I've personally been involved in a number of similar studies over the last few years, most of which I can't describe in any detail because they are proprietary to clients. One exception is work we did back in 2001 on the American Customer Satisfaction Index. We administered the same satisfaction interview about a leading online bookseller both by telephone and to an online panel. To qualify for the survey, a respondent had to have purchased from the online merchant in the previous six months. We found few significant differences in our results. Additional experiments on this study with offline merchants have been less encouraging.

In 2003 we conducted a series of experiments aimed at the possible transition of at least some of Medstat's PULSE survey from telephone to the Web. Despite using two different panels and a variety of weighting techniques, we were unable to produce online data that did not seriously disrupt the PULSE time series on key measures. This study continues to be done by telephone.
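For readers curious what "weighting techniques" can mean in practice, here is a minimal sketch of one common approach, iterative proportional fitting (raking), which adjusts sample weights until weighted demographic margins match population targets. It is purely illustrative: the variables, targets, and code are my own assumptions, not the methods actually used in the PULSE experiments.

```python
import numpy as np
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Iteratively adjust weights until weighted margins match the targets."""
    w = np.ones(len(df))
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            groups = df[var].values
            margin = pd.Series(w).groupby(groups).sum()  # weighted count per level
            total = w.sum()
            factors = np.ones(len(df))
            for level, share in target.items():
                cur_share = margin.get(level, 0.0) / total
                if cur_share > 0:
                    f = share / cur_share
                    factors[groups == level] = f
                    max_shift = max(max_shift, abs(f - 1.0))
            w *= factors
        if max_shift < tol:  # stop once all margins are (nearly) on target
            break
    return w * len(df) / w.sum()  # normalize so weights average to 1

# Illustrative use: weight a made-up online sample to census-style margins.
sample = pd.DataFrame({
    "age":    ["18-34", "35-54", "55+", "35-54", "55+", "18-34"],
    "region": ["N", "S", "N", "N", "S", "S"],
})
targets = {
    "age":    {"18-34": 0.30, "35-54": 0.40, "55+": 0.30},
    "region": {"N": 0.55, "S": 0.45},
}
sample["weight"] = rake(sample, targets)
print(sample)
```

Even when weights like these bring the demographics into line, they cannot guarantee that attitudinal or behavioral measures will track, which is essentially what we found.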

In some cases, despite significant differences between telephone and online, clients have still elected to transition to online, either in whole or in part. In at least one customer satisfaction study, the client felt that the online results more accurately reflected its own experience with its customer base. In another, the cost savings were so significant that the client elected to accept the disruption in the time series and establish a new baseline with online measurement.

What all of this suggests to me is that it is impossible to know in advance whether a given study is a good candidate or a poor candidate for transition to online. There is little question that online respondents are different in a whole host of ways—demographic, attitudinal, and behavioral—from the rest of the population and from the survey respondents we typically interview in offline modes. The key is to understand whether those differences matter in the context of whatever we are trying to measure in our survey. We can only learn this through empirical investigation, and even then, explaining the differences in our results can be frustratingly difficult.
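As a concrete, if oversimplified, illustration of that kind of empirical check, one might field the same questionnaire in parallel by telephone and online and test whether the gap on a key estimate is larger than sampling error alone would explain. The counts below are invented, and the two-sample test of proportions shown is just one of many ways to make the comparison.

```python
# Hypothetical parallel-mode comparison on a single key measure.
# All counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

phone_yes, phone_n = 412, 1000    # telephone respondents reporting the behavior
online_yes, online_n = 465, 1000  # online-panel respondents reporting it

stat, p_value = proportions_ztest([phone_yes, online_yes], [phone_n, online_n])
print(f"phone = {phone_yes/phone_n:.1%}, online = {online_yes/online_n:.1%}")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the mode gap is unlikely to be sampling noise;
# whether that gap matters still depends on how the tracking data are used.
```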
