Previous month:
December 2008
Next month:
February 2009

Posts from January 2009

Toronto in January?

I've just come back from Toronto where I gave a talk at NetGain 3.0, a one-day conference put on by MRIA. As the title suggests, the focus was online research and the presentations covered all of the usual ground that conferences like this cover. Now I don't mean that as a knock. I think it's good news that the issues are being widely discussed in all sorts of venues. There may not be a whole lot of new solutions being proposed but at least people are increasingly aware of the problems the industry and clients are wrestling with.

The conference was opened by Pete Cape of SSI in the UK. Pete has been a major voice in the ongoing debate. He took the group through an exercise that quickly exposed that we are an industry of amateurs with little background or formal training in market research. Most people seem to have just stumbled into the business, and that's not just true in Canada. It goes a long way toward explaining why we struggle with many of these methodological issues. Bottom line: as an industry we too often don't really understand what we are selling or the validity of the claims we make for it.

Next up was John Wright, a political pollster from Ipsos-Reid. His talk was equal parts bragging about how accurate their telephone polling has been, presenting lots of data "proving" that online can be just as good as telephone polling if it's done right, and railing at organizations like MRIA and AAPOR for their intransigence around the reporting of margin of error statistics for online studies. The truth is that political polling is one arena where online has been shown to work pretty well, although the art of political polling is arcane enough that we should probably not infer much about other kinds of research. The railing against MRIA and AAPOR was Exhibit A in Pete Cape's argument that research training is desperately needed in our industry. I happened to be sitting next to the Standards Chair for MRIA and we agreed that John's quarrel was not with MRIA or AAPOR but with whoever invented the margin of error calculation, with its problematic assumption that you have a probability sample.
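
For anyone who wants that last point made concrete, here is a minimal sketch of the textbook calculation (my own illustration, not anything presented at the conference) and the assumption baked into it:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classic margin of error at roughly 95% confidence.
    Only meaningful under the assumption of a simple random (probability)
    sample, which is exactly what an opt-in online panel does not give you."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: a 50/50 split on n = 1,000 completes -> about +/- 3.1 points
print(round(100 * margin_of_error(0.5, 1000), 1))
```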

Next up was a paper by Anne Crassweller that she had also presented in Dublin at the ESOMAR Panels Conference. It's one of those studies chronicling an attempt to move a long-term study online that failed because the topic—newspaper readership—is to some degree correlated with online behavior. This would seem to be a classic example of where online does not fit the purpose of the research.

Then came what I thought was the best presentation of the conference, by Barry Watson from Environics. These guys build population segmentation models based on attitudes and values. Barry presented some data comparing three online panels to the general US population. A key segment that is way overrepresented in the panels is what they call "liberal/progressives." The underrepresented segments included groups they call the "disenfranchised" and "modern middle America." To really understand the implications one would have to dig deeper into the segment composition, but this approach of trying to understand the attitudinal and behavioral differences of online panelists versus the general population strikes me as very important and generally missing when people make claims of "representativeness." Mostly the industry has expressed these things in demographic terms, which really are somewhat meaningless in this context.
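
As a purely illustrative sketch (the segment shares below are invented, not Environics numbers), quantifying that kind of attitudinal bias comes down to comparing a panel's segment shares to the population's:

```python
# Hypothetical segment shares; the real Environics figures are not reproduced here.
population = {"liberal/progressive": 0.18, "disenfranchised": 0.15, "modern middle America": 0.22}
panel      = {"liberal/progressive": 0.31, "disenfranchised": 0.07, "modern middle America": 0.14}

for segment, pop_share in population.items():
    ratio = panel[segment] / pop_share  # >1 means overrepresented, <1 underrepresented
    print(f"{segment}: panel share is {ratio:.2f}x its population share")
```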

Barry also gave us the best quote of the conference: "Bias is only a problem when you don't know what it is."

The afternoon was less interesting, even with me kicking it off. My main message: let's stop talking about representativeness and instead focus on understanding bias and how it relates to the business problem we are studying.

Next we had the obligatory argument for "eye candy" to increase respondent engagement and lots of data to show just how widespread social desirability bias can be. And there was a pitch from the RFL people about their "pillars of quality."

When all was said and done, I found it not a bad way to spend a day. I got some fresh perspective and a chance to rant a bit, which is always welcome.


Tardy report on last November’s “Research Industry Summit”

Shame on me. Way back in November Colleen Carlin attended a conference in Chicago and dutifully wrote up a report for me to share. Then somehow it got lost in my inbox. Imagine that! In the vein of better late than never, here is her report. (As a footnote, this is the same conference referred to in this earlier post.)

The Research Industry Summit: Solutions that Deliver Respondent Quality

The theme of the summit was data quality. Most sessions covered the quality issue as it relates to Web panels. The technical issues (cheaters/duplicates) that have been at the forefront of the quality movement (as it relates to panel data) have faded into the background as respondent motivation took center stage. The general idea seems to be that respondent engagement is synonymous with data quality. Several suggestions for how to engage respondents were put forth by different panel providers. Greenfield Online is suggesting that the use of flash programming produces more interesting surveys and that this will lead to higher levels of respondent engagement. They presented results from one (yes, just one) experiment in which they randomly assigned people to either a 'traditional' survey or a survey with flash programming. The survey with flash programming was deemed to be more engaging based on the following outcomes: on average, the time to complete the flash survey was a minute less than the traditional survey, and the flash survey had a higher response rate and lower dropout rate. What was not examined was the issue of bias. Does the use of flash programming introduce respondent bias into the data set? Not all computers or data connections can handle flash programming; are the respondents left out somehow different on the attributes of interest? More research is needed to determine the efficacy of advanced visual (i.e., flash) techniques on data quality.
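
For what it's worth, the comparison itself is straightforward to test; the harder question is who gets screened out. A minimal sketch, using invented counts rather than Greenfield's actual numbers:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test, e.g., for comparing dropout rates
    between a traditional survey and a flash version."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 120 dropouts out of 1,000 starts on the traditional survey
# versus 80 out of 1,000 on the flash version.
z = two_proportion_z(120, 1000, 80, 1000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

None of this answers the bias question, of course; that requires comparing who completes each version, not just how many do.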

Another idea for engaging respondents models itself on popular social networking sites like Facebook. Toluna is the first panel company to embrace the concept of social networks as a way to engage panelists. The 2.4 million Toluna panel members can now interact with each other via an interface that looks very similar to Facebook. They can poll and debate each other. Mike Cooke pointed out that a potential problem with this model is participation bias and conditioning effects. How will the interaction between panel members affect their responses to surveys? An employee of e-rewards told me that everyone in the panel industry is closely watching the success or failure of this concept.

Ali Moiz of Peanut Labs sees the synergy between social networking sites and panels a bit differently. Through tools like Facebook Connect he thinks that panel providers and clients can begin to verify the identity of respondents in addition to mining their profiles for additional data. With permission from respondents, a company could gain access to Facebook profiles and use this information to validate that someone is who they say they are (e.g., a technology decision maker). The problem I see here is that it is probably about as easy to set up a false Facebook profile as it is to misrepresent oneself on a Web panel.
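
Mechanically, that kind of validation is simple enough. The sketch below is entirely hypothetical (the fetch_connected_profile function stands in for whatever profile lookup Facebook Connect would actually permit) and mostly illustrates the point: you end up trusting the profile instead of the panelist.

```python
def fetch_connected_profile(access_token):
    """Hypothetical stand-in for a profile lookup done with the respondent's
    permission; here it simply returns canned data for illustration."""
    return {"job_title": "IT Director", "employer": "Acme Corp", "age": 41}

def validate_respondent(claimed, access_token):
    """Flag mismatches between what a respondent claimed in screening
    and what their connected profile says."""
    profile = fetch_connected_profile(access_token)
    return {field: (claimed[field], profile.get(field))
            for field in claimed if claimed[field] != profile.get(field)}

claimed = {"job_title": "IT Director", "age": 29}
print(validate_respondent(claimed, access_token="dummy-token"))  # flags the age mismatch
```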

A presentation from a client in the financial services industry highlighted the need for caution when changing the mode of data collection for existing research from phone to Web. Research done over the phone was compared to results from three different Web panels. Product demand ratings from the phone data and from two of the panels were statistically identical. The third panel twice yielded results that were over 20 percentage points higher than what was reported on the phone or by the other two panels. A closer examination of the third panel revealed that it was a newer panel and thus had less seasoned panel members. All past Web studies were aggregated and metadata about respondents added (e.g., number of surveys completed, tenure on panel, etc.). The findings indicate that more seasoned panelists and those who take more surveys give lower product demand ratings. A link to actual purchase behavior showed that these lower product demand ratings were much closer to actual purchase behavior than the other results. These findings argue for treating panelist behavior as a kind of demographic that can be included in analysis.
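
That last suggestion is easy to act on. Here is a minimal sketch of what it might look like, assuming a hypothetical respondent-level file with the kind of metadata described above (the column names and values are invented):

```python
import pandas as pd

# Hypothetical respondent-level data; columns mimic the metadata described above.
df = pd.DataFrame({
    "panel":             ["A", "A", "B", "B", "C", "C"],
    "surveys_completed": [42,   3,  15,  55,   1,   2],
    "tenure_months":     [24,   2,  12,  30,   1,   3],
    "demand_rating":     [3,    8,   5,   2,   9,   8],  # e.g., 10-point purchase-intent scale
})

# Treat panelist behavior like any other demographic: band it and compare results.
df["experience"] = pd.cut(df["surveys_completed"],
                          bins=[0, 5, 25, 10_000],
                          labels=["new", "moderate", "seasoned"])
print(df.groupby("experience", observed=True)["demand_rating"].mean())
```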

My conclusion is that we need to be focused on ways to engage panelists, but not in a 'one size fits all' manner. Panelists, like people generally, are motivated in a multitude of ways, and we need to figure out how to include techniques that will appeal to nearly everyone. We need to be cautious, carefully testing the different techniques proposed to increase engagement and determining how the data are affected. Primum non nocere: first, do no harm.


ISO Cometh

It looks like ISO certification finally is coming to US market research.

By way of background, in 2006 the International Organization for Standardization published a new standard, ISO 20252, for "Market, opinion, and social research." The primary impetus behind the standard was to facilitate global market research by creating a consistent standard worldwide. Equally important, in my view, is the standard's goal of "encouraging consistency and transparency in the way surveys are carried out, and confidence in their results and in their providers." The standard includes a set of definitions for the key terms used in research, requires that every step in the research process be standardized and documented, and that these materials be shared with clients on request. A key motivating principle of the standard is that standardization of processes and consistent application of them in research projects will lead to improvements in research results. Equally key is a requirement that a firm submit to a certification and ongoing auditing process to ensure that it is operating consistently within its procedures.

The industry's response worldwide has been mixed. Some countries—most notably the UK, Australia, and Mexico—have embraced it quickly and on a fairly broad scale. Other countries, like Canada, have used the standard as the basis for their own process requirements and created a sort of "ISO lite" outside of the ISO process. US companies generally have been cool to the standard, even though the Technical Committee that created the standard had US representatives who participated in its development. I confess that from time to time I was part of the process by reviewing some drafts and participating in committee meetings. Further, I admit to being an ardent ISO supporter.

Why the resistance in the US? I think there are three reasons.

First, some people are legitimately concerned about the cost. Not only do you have to pay someone to conduct the certification process, you also need to put a lot of time and effort into standardization, documentation, and ongoing monitoring. For smaller companies in particular, this could be a significant cost burden.

Second, many companies in the US were forced to do some sort of ISO 9000 certification in order to work with certain industries, most notably, the auto industry. The 9000 series is essentially a manufacturing standard that does not translate all that well to a service industry like MR. Companies forced to do 9000 often found it expensive and not especially useful.

Finally, I think there is significant misunderstanding of the 20252 standard. There are those who say that it sets the bar too low in what it requires and others who claim that it's not specific enough to be useful in promoting quality research. There is some truth here. One major problem the Technical Committee that developed the standard had to overcome was the broad variation in practice across the roughly 20 countries involved in the discussion. But the real key here is to understand that ISO 20252 is not a quality standard per se. Rather, it's a service standard that stresses the need for standardization and transparency, especially relative to clients. It may not be as precise as some people might like in terms of specifying exactly how research should be done. However, it requires that companies tell clients exactly how they are doing the work and leaves it to clients to judge whether the quality of the resulting research meets their needs.

The big news in the US is that CASRO has formed a task force to study implementation. I think this is long overdue and I am excited to see it moving forward. Expect to hear more about ISO in the coming months: here, in industry communications, and at conferences.


Increased mobile Web use triggers Web surveys on the move

You have to love the hype. Or maybe not.

The title of this post is taken from an email I got this morning from a company in the UK that, you guessed it, wants to do surveys on Web-enabled phones for us. It goes on to say, "According to a November report by the BBC, there is a notable 25 percent increase in people accessing the web from mobile phones. This compares to a mere 3 percent growth in internet access from desktops. The report predicts that this new wave of mobile access to the web will increase even more in 2009." Does this mean that more people are now accessing the Web from their mobile phones than from desktops? Not exactly. One missing fact: the percent of mobile phone users who use their phone to access the Web.

I am reminded of another bit of news in this morning's email: Internet penetration in China increased by 43 percent last year bringing the current level to 22 percent. Big increases on a small base are misleading.
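
To make the arithmetic explicit: if penetration stands at 22 percent after a 43 percent increase, it was roughly 22 / 1.43 ≈ 15 percent a year earlier. The headline growth amounts to about seven points of penetration, and something like four out of five people in China are still offline.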

But back to the mobile story. As I reported back in November there was interesting commentary on this both in a paper at the ESOMAR Panels Conference and at the follow-on Online Forum. The country at issue there was Germany but it's hard to imagine the situation is much different elsewhere. Surveys on mobile phones have three major problems:

  1. While more and more phones are Web-enabled, people make limited use of the functionality because it's just too expensive. In Germany, for example, roughly two-thirds of mobile phones are Web-enabled but less than 15 percent of mobile users actually access the Web from their phone.
  2. There is a significant bias toward young people, and I mean really young people with a heavy dose of teenagers.
  3. The interface is very challenging. Just porting over what we are used to doing on the Web is the road to failure. Questionnaires need to be dramatically shorter and presented much differently on the small screen.

What troubles me most about this is the way in which hype overcomes good judgment. The imperative to innovate with technology in MR has created a frenzy of new applications surrounded by hype and false claims that I think confuse clients and ultimately devalue what we do. I am fond of citing Colm O'Muircheartaigh's definition of error in surveys: "work purporting to do what it does not do." The issue here is not just one of survey error. The issue is basic business ethics.