
Posts from March 2010

Plus ca change

In the current issue of Research Business Report Bob Lederer muses on one of his favorite topics, online panel quality, and, at the risk of oversimplifying, seems to say that after lots of industry-wide soul searching it's now time for some action. He concludes by saying, "I suspect that 2010 will be all about tests and adoption of solutions that breathe reliability, replicability, and added value in to an infant (decade-old) research mode that, to those paying attention, had deeply serious shortcomings."

Had? If we have learned nothing else over the last five years it ought to be this: the online panel model is deeply flawed in both theory and practice. Like my allergies, its problems can be controlled, but they can never be fixed.

Let's talk theory first. Well, there is no theory. The arguments are all empirical: it works. The methodology has been legitimized largely by anecdote and the endless repetition of, "It works." No underlying scientific principles have been enunciated or testable theories proposed. Without theory we can never be sure when it will work and when it won't. A tiny handful of people recognize the flaws and are trying to apply a broader set of techniques for working with nonprobability samples, techniques developed in disciplines outside of survey research. I hope they come up with something. But the vast majority of practitioners in MR treat panel sample as if it were a probability sample drawn from a high-coverage frame. It's not. It's a tiny slice of the population that we just don't understand very well at all. The notion that these panels are representative of the broader population is just plain silly.

And how about practice? For at least five years we have been talking about four main problems:

  • People sometimes create false identities when they join panels and misrepresent themselves in order to maximize survey opportunities.
  • People sometimes rush through surveys and don't make an honest effort to answer thoughtfully.
  • People will sometimes take the same survey more than once, or worse yet, develop bots that simulate a respondent and take the same survey many times over.
  • The experience of being on a panel and taking lots of surveys over time can change how people respond.

We are told that there are solutions for all of these, but too often the solutions themselves just introduce more problems. For example, it is rapidly becoming standard practice for a panel company to "validate" a panelist's identity by bumping his or her particulars up against one of the big marketing databases like Acxiom or Experian. But these databases fall well short of universal coverage of the population and tend to miss people who don't have credit cards or bank accounts. And so real people are rejected and more bias is introduced into the panel. Worse yet, solutions that are proposed are then ignored. For at least the last two years we have known that simply collecting a respondent's IP address and easily retrieved information from his or her browser can help us identify duplicates. Yet in the past week I've seen two studies with significant duplication in samples from well-established panel companies that claim they use digital fingerprinting to guard against just this sort of problem.
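To make the point concrete, here is a minimal sketch of the kind of check described above. It assumes we capture each respondent's IP address and a few browser details at the start of the survey; the field names, the attributes combined, and the exact-match rule are all illustrative, not any panel company's actual fingerprinting method, which would combine many more signals and use fuzzier matching.

```python
# Minimal sketch of duplicate detection from IP address plus browser details.
# The attributes used here (ip, user_agent, accept_language, screen) are
# illustrative; production fingerprinting combines many more signals.
import hashlib
from collections import defaultdict

def fingerprint(resp):
    """Hash a few easily collected attributes into a crude device fingerprint."""
    raw = "|".join([
        resp.get("ip", ""),
        resp.get("user_agent", ""),
        resp.get("accept_language", ""),
        resp.get("screen", ""),  # e.g. "1280x800x24"
    ])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def flag_duplicates(respondents):
    """Return groups of respondent ids that share the same fingerprint."""
    seen = defaultdict(list)
    for resp in respondents:
        seen[fingerprint(resp)].append(resp["id"])
    return [ids for ids in seen.values() if len(ids) > 1]

if __name__ == "__main__":
    sample = [
        {"id": "r1", "ip": "203.0.113.7", "user_agent": "Mozilla/5.0", "accept_language": "en-US", "screen": "1280x800x24"},
        {"id": "r2", "ip": "203.0.113.7", "user_agent": "Mozilla/5.0", "accept_language": "en-US", "screen": "1280x800x24"},
        {"id": "r3", "ip": "198.51.100.4", "user_agent": "Mozilla/5.0", "accept_language": "de-DE", "screen": "1920x1080x24"},
    ]
    print(flag_duplicates(sample))  # [['r1', 'r2']]
```

A real system would add more signals (plugins, fonts, time zone) and allow for near-matches, but the core idea is no more complicated than this. The technology is not the hard part; actually using it is.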

So I think the notion that the online panel paradigm can be "fixed" is fanciful. The essential problem is that the goal our clients have set for us (faster and cheaper) is fundamentally at odds with what we pretend to deliver (accuracy and validity). And it didn't start with online. MR has been cutting corners for decades to make it faster and cheaper. Exhibit A: Quota sampling. We work in a competitive environment where the lower price wins more often than not and the buyer doesn't always understand what he's buying. Another lesson of the last five years: that dynamic is not going to change any time soon.

So regardless of their problems, online panels are not going away. Bob hopes that in 2010 we will see more tests and the actual adoption of some of the solutions now on the table. So do I. But there is a bigger challenge that I see no sign of the industry stepping up to. Here I quote Lyndon Johnson, who once said, "Boys, I don't know much, but I know the difference between chicken shit and chicken salad." We might take that to heart. What's been missing so far in all of the discussion of panel quality is a frank admission of what online is not. The panel quality solutions are fine, but they don't replace the need for us to do a much better job of conditioning the conclusions we draw and the advice we give our clients on the quality of the evidence at hand.


You can thank me later

Back in January I commented on a piece by Chang and Krosnick that had been published in Public Opinion Quarterly. Shortly thereafter I got an email from Michelle Rawling at POQ telling me that they had unlocked the article so that non-subscribers could get access to the whole piece. Here is the link. She also reminded me that their annual special issue for 2009 is available to non-subscribers as well. The 2009 topic was the 2008 election.

Read and enjoy.


Mobile Research Conference 2010 – Final Thoughts

Reflecting back over the last couple of days I'm finding four takeaways. But first, I think we need to do a better job of distinguishing between what one presenter called "the audio channel" and other forms of mobile data collection, mostly by Web. It seems that when most people say "mobile" they mean the latter, but we could benefit by cleaning up our terminology a bit. In this post, I take mobile to mean everything other than interviewer-administered surveys.

  1. I came believing that mobile is still a niche method and nothing I heard changed my mind on that score. It was especially telling, I thought, that a panel designed to talk about ways to get researchers to take mobile more seriously didn't come up with much, even under Tim Macer's expert leadership. Tim showed us results from his annual industry survey that put the share of data collection being done by mobile at two percent. IMHO that's not going to grow significantly any time soon. Mobile is just too limited as an interviewing platform, and the penetration of Web-enabled mobile is about where Internet penetration in the US was 15 years ago.
  2. It may be a niche but it's a very cool niche. There are some really neat things being done, and they fit well with the ethnographic research that has been so popular over the last several years. I see lots of application there, but that in itself is something of a niche.
  3. Mobile is still more about the technology than it is about research. It's still very much in the evangelical stage. Nonetheless, there will be agencies that sell it aggressively and clients who will be dazzled by it, just not very many of them. One hopes that we learned something from the over-hyping of online, but that probably is a false hope.
  4. The mobile marketing industry is seeing mobile less as a marketing medium per se and more as a connector between advertising media—offline and online. Maybe we need to take note of that and think about whether there is a similar approach for our industry. Unfortunately, I'm not smart enough to figure that one out.

All that said, it's pretty clear that while we may not be leveraging all of the cool features of today's mobile devices on a broad scale anytime soon we surely will be making a lot more use of the core functionality. The best part of mobile is still the telephone part.

Finally, kudos to Globalpark for putting this together.


Mobile Conference is done!

Now for the afternoon session. Tim Macer is chairing another panel. He set the bar pretty high with the last panel, so it will be interesting to see if he can clear it again. His panel is Richard Windle (again); Nick Lane from Mobile Squared; Linda Neville from Coke (again); and Tanja Pferdekamper from Globalpark (again). Their topic is getting MR to take mobile more seriously. This is why I came.

People are mostly talking about the specialized functionality that mobile phones have and the cool things you can do with them. But it's still niche stuff. Now Linda is giving us the voice of the client and the need for confidence. What is the respondent experience? Are the data reliable? How does mobile fit with the other stuff the client is doing to understand the same problem? Is it filling a gap? What does it add? And just the fact that it's new and different can be a plus with some clients. It's shiny and so 21st century! Nick is now reminding us that it's still tough to do things with the mobile Internet because of penetration issues. SMS is another matter; everybody uses it, at least in the UK. But while texting is widely practiced, it's very difficult for data collection. So maybe it's a waiting game until costs go down and Internet penetration goes up significantly. He further argues that we need to learn to do short surveys of maybe 2-3 minutes if mobile is going to work for us.

Tim is now showing us data from a survey he does every year of MR firms worldwide. Key findings:

  • Two percent of revenues are coming from mobile and 47 percent of that is via Web.
  • Most respondents in his survey (just north of 50 percent) see it as not useable outside of certain niches but 10 percent see it as viable "as any other method."
  • The benefits cited by his respondents start with convenience for the respondent and then fall off to capturing the "moment of truth," raising response rates, and reaching certain kinds of people who are difficult to reach by other methods.

Now it's open to the audience for questions and ideas. You can feel the brain cells all over the room straining to come up with some major insights, working really hard on the issue, but in my humble opinion it's all niches and not very exciting niches. A bunch of anecdotes and some maybe this and maybe that.

Unfortunately, a crisis back in the office pulled me from the conference and so I missed two presentations—one by Emmanuel Maxi from evolaris and the other by Ingvar Tjostheim from the Norwegian Computing Center. Based on jumping in and out, both looked to be good reports on research projects using mobile with lots of interesting data. I'm sorry that I missed them.

At least I am back in my seat to hear from Tom de Ruyck from Insites, who is going to talk about using Twitter for MR. (And here I should note that I follow Tom on Twitter.) He has started by giving us stats that clearly show Twitter is "small" and "niche." He's not making claims about representativeness, in fact just the opposite; instead he is giving examples of how you can design and do research with Twitter. The specific project he is discussing is their Ultimate Twitter Study, which you can get lots of detail about on the Insites Web site. Mostly it's a tutorial about Twitter and the people who use it, and he is doing a nice job of orienting us to the ins and outs of the Twitterverse and its inhabitants. It doesn't have much to do with mobile, but it's interesting nonetheless. The key is learning to harvest what's there and turn it into insights for our clients. Not an especially new story but nicely done.

Sabine Stork is up next to talk about brand emotions, presumably monitored via some mobile devices. Unfortunately, I have to take an important call and am missing it.

Now we are hearing from Nicola Doring from Ilmenau University of Technology, who is going to talk about the psychological aspects of interviewing by mobile phone. Right off she makes the point that we probably can make more use of the "audio channel" than of other forms of interviewing. She sees three factors that go into the psychological aspects of the interaction: the medium, the individual, and the situation. These all interact to create a certain psychology in a mobile exchange. We need to realize that for a lot of people it's not just a phone. It can be scary and one more example of information overload. For other people, it's their best friend, a multipurpose device, a toy, and a constant companion. What it is used for and with whom one is communicating are also key to how people react to it.

This would seem to me not to be applicable just to mobile. People also have different attitudes and anxieties about regular, landline phones. Granted, the emotions may be more complex with mobile because of all of that functionality, but this line of inquiry might be of general interest for people who do telephone research. I don't know how much it's been looked at in the telephone survey literature, although I'm not hearing all that much in this talk that helps me to understand how the survey data might be impacted.

Of course, we all believe that the setting in which a phone interview is conducted is key, even though research presented earlier at the conference did not find as much variation between landline and mobile phone situations as we tend to expect. Perhaps that's because people in awkward situations either do not take the call or decline to be interviewed.

The final presentation is by Malte Friedrich-Freksa from YOC. He's going to talk about the implications of application use for mobile research design. He starts by pointing out that application download on a broad scale is a new and different behavior, largely brought on initially by the iPhone. This is fundamentally different from surfing the Web with a mobile device and may have implications for how we recruit people to do surveys on their mobile devices. He has run through all of the ways in which people get recruited to surveys and then complete them. They all involve a push of an invitation and then various ways to actually complete the survey. Then, I confess, I lost the thread, but I think one key takeaway is that response rates were better for mobile surveys than for taking people to the Web for an online survey. I'm not sure, though. Maybe I've become a little dense after two days of listening to over 20 presentations.

Finally, I recognize that these posts have a rambling character to them. After this all sinks in I will post a brief summary of what seem to me to be the main takeaways from what has been an interesting couple of days.


Mobile Conference Morning Session Finale

A new group of presentations is starting with the first presenter being Klaus Dull from Pretioso. His topic is sales force integration in pharma. He has started by talking about the challenges:

  • Standard platforms are elusive. Blackberry is something of a business standard but we are seeing more and more Android-based devices.
  • Connectivity is key but it often is elusive (in different locations and inside buildings) and likely will continue to be so for the near future.
  • Usability is a problem. Devices and apps vary in how easy they are to use.
  • Integration with other corporate systems is also important but often lacking.

All of these problems need to be dealt with if one is going to do mobile B2B research. It appears he has now lapsed into a sales pitch for their mobile-based forms product, which solves some but not all of the above problems. He is getting deep into database and standards issues, but it's not at all clear to me that there is any real research application here, just straight sales force automation. Now he is into another not very crisp demo. It seems to be a railway survey. It's not clear what this has to do with sales force integration in pharma. I am confused about its purpose other than to show us his system. I've tuned out.

Now another panel, this one on the implications for mobile of the decline in landline use. The panel has Marek Fuchs (again) and Richard Windle from IPSOS. Tim Macer from meaning, ltd. is moderating. Tim has started by overviewing the impact of wireless substitution in the US. He also has shown us how CATI has been declining over the last five years. So two trends: declining use of landline and declining use of CATI. Are these related? What is the role for mobile?

Marek is starting by talking about coverage bias in landline telephone surveys. His data come mostly from the Eurobarometer. No surprise that roughly the same pattern we see in the US is playing out in almost the same way across the 27 EU countries—declining landline use and increasing mobile use. As he points out, this would not be a big deal if the two groups were not very different from one another. He also has introduced a relative coverage bias measure computed across those countries from the Eurobarometer data. It does a nice job of showing the levels of error we are likely to get on selected characteristics if a study uses only landline. This is a neat measure for driving home the impact of coverage error. He is very methodically walking us through the possible adjustment techniques, and my guess is that the goal is dual frame as the only viable solution. And now that's where we are. But he goes on to hypothesize that in the not too distant future it may make sense to do a single frame design, with that frame being the mobile frame! We still will get landline people but also pick up the mobile-only people. This may happen very quickly in countries like the Czech Republic, where mobile phone penetration is now in the 90s.
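For readers who want the mechanics, the usual way to express landline coverage bias for a mean looks something like the following (the notation is mine, not necessarily the measure Marek is using):

\[
\operatorname{Bias}(\bar{y}_{\text{landline}}) \;=\; \frac{N_{\text{mobile-only}}}{N}\,\bigl(\bar{Y}_{\text{landline}} - \bar{Y}_{\text{mobile-only}}\bigr),
\qquad
\text{Relative bias} \;=\; \frac{\operatorname{Bias}(\bar{y}_{\text{landline}})}{\bar{Y}}
\]

In words: the bias is the mobile-only share of the population times the difference between the people a landline frame can reach and the people it cannot, and the relative version scales that by the overall mean so it can be compared across characteristics and countries.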

Richard Windle is elaborating on dual frame designs and on getting really pure by not just interviewing mobiles but screening for mobile-only. This can be serious money, but as he points out, the real differences are between the mobile-only and everyone else. He is showing a number of different surveys in which getting mobile-only people is essential. Students, at the moment: try them first online via email, then paper and pencil, and then mobile.
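For reference, the classic way to combine the two frames is a Hartley-style composite estimator; again, the notation is mine and this is only a sketch of the general approach, not necessarily what Ipsos does:

\[
\hat{Y} \;=\; \hat{Y}_{\text{landline-only}} \;+\; \lambda\,\hat{Y}_{\text{both}}^{\,(\text{landline frame})} \;+\; (1-\lambda)\,\hat{Y}_{\text{both}}^{\,(\text{mobile frame})} \;+\; \hat{Y}_{\text{mobile-only}},
\qquad 0 \le \lambda \le 1
\]

The overlap domain (people with both kinds of phone) is estimated from each frame and blended with a mixing weight, while each "only" domain can come from just one frame. That is exactly why screening for mobile-only matters: it is the one group the landline frame can never supply.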

What we have had here is a panel of two guys very schooled in the principles of survey research, viewing the problem from a scientific perspective, brought to the stage by a moderator who also gets it. It's not sexy and innovative like other stuff we have heard, but it has the feel of the heavy lifting some in the industry have to do in order to deliver representative research.

In the Q&A Paul Lavrakas is responding to a request from the presenters to add some US flavor, and Paul is obliging. He is detailing all of the obstacles to calling and interviewing on mobile, which have been somewhat glossed over in the presentations. These include sensitivity to respondent cost, lack of geographic detail, differential refusal rates by age, the inability to use automated dialing, and so on. It continues to be a very tough problem.