Previous month:
August 2010
Next month:
October 2010

Posts from September 2010

Privacy continues to be a major issue for our industry

This is the first of what I hope will be a (short) series of posts from the ESOMAR Congress in Athens. The theme, not surprisingly, is Odyssey: The Changing Face of Market Research. The change part of that title refers to the double challenge of (1) an onslaught of new methods and (2) the still difficult economic environment. More on that in later posts. For now I want to focus on still another truly pressing issue the industry faces, and that is privacy. For most of the last decade the EU's privacy laws have more or less set the standard for the profession. Now, looking at all of the changes that have come to pass over that decade, the EU is taking a second look and its privacy laws may undergo significant revision. Depending on how all of this sorts out, these revisions could make life a lot more difficult for researchers. There are five areas of concern:

  1. Online profiling vs. segmentation
  2. The definition of what constitutes a child
  3. Online tracking
  4. The definition of "consent"
  5. Statistical data vs. scientific data: how each is defined

As always in these matters, the interests of research are best served by helping governments understand two things. The first is that research is different from commercial speech (aka selling). Hence the importance of always resisting the temptation to cross that fine line between research and marketing. The second is continually demonstrating the strength of our industry's self-regulation. It's key here that professional and trade associations not only develop and publish guidelines, codes, standards, etc. but that they enforce them among members and non-members alike.

Balancing risk and reward in survey incentives

The current issue of Survey Practice has an interesting little piece on the use of lottery incentives in online surveys. (Here I quickly point out that the correct terminology should be "sweepstakes," since there are legal issues around anyone but governmental entities running lotteries, but let's not get distracted by that.) In self-administered modes like online, the right incentive can have a significant impact on response rate. We all would like to pay an attractive incentive contingent on completion, but money always is an issue. Sweepstakes have long been a favorite of clients looking to boost response without spending a lot of money. My recollection of the literature on this topic is that sweepstakes are better than no incentive at all, but nowhere near as effective as paying everyone who completes.

The article describes an experiment to answer a question that I get asked all the time: is it better to offer one big prize or several smaller prizes? If I have $1000 to spend, will I get more bang from that as a single prize, as four $250 prizes, or even as ten $100 prizes? The answer from this particular research is the standard answer to virtually all methodological questions: it depends. The authors argue that the key is the economic circumstances of the target respondents. Professionals, who presumably are reasonably well off, respond at a higher rate when a single large prize is offered. Students, on the other hand, are more persuaded by the greater odds of winning a smaller amount of money.
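The prize-split tradeoff above is simple arithmetic, and a quick sketch may make it concrete. This is purely illustrative: the $1000 budget and prize counts come from the question above, but the pool of 2,000 completed surveys is an assumption of mine, not a figure from the article.

```python
# Hypothetical sketch: with a fixed sweepstakes budget, splitting it into
# more prizes raises each respondent's odds of winning while shrinking
# the amount each winner receives.

def sweepstakes_odds(budget, prize_count, respondents):
    """Return (value of each prize, chance any one respondent wins)."""
    prize_value = budget / prize_count
    win_probability = prize_count / respondents
    return prize_value, win_probability

BUDGET = 1000
RESPONDENTS = 2000  # assumed number of completes, not from the article

for prizes in (1, 4, 10):
    value, prob = sweepstakes_odds(BUDGET, prizes, RESPONDENTS)
    print(f"{prizes:>2} prize(s) of ${value:,.0f} -> "
          f"1-in-{RESPONDENTS // prizes} odds ({prob:.2%})")
```

Note that the expected payout per respondent (budget divided by respondents) is identical in every configuration; only the risk profile changes, which is consistent with the authors' argument that respondents' economic circumstances determine which configuration persuades them.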

This makes a lot of sense to me. I am embarrassed that I never figured it out on my own.

More signs of the Apocalypse

Way back in the last century, when Web interviewing was just starting to warm up, more than one researcher worried that with the Web as a research medium anybody could create and field a survey. You didn't need a mail room, a call center, or a field force anymore. In 1999, those fears were made flesh with the launch of SurveyMonkey. Even the name seemed to mock us! Since then our fears about DIY eating our lunch have ebbed and flowed, but with the Great Recession they are once again in season as we worry about clients doing more and more "insourcing" to save money.

My counter to the DIY argument has always been that real researchers add value in all sorts of important ways. Sampling, questionnaire design, and analysis come to mind as skills only acquired through training and experience. But I must confess that I'm not so sure any more. In any given week the trade pubs will deliver to your inbox article after article with research results that frankly are just crap. Bold pronouncements about major trends or new methods that tell you so little about how the research was done that you can't possibly judge its quality or the validity of its conclusions. Yet a careful reading between the lines of most will more often than not hint loudly that the claims being made are nowhere near valid given how the "research" was designed and executed. This stuff is not coming from some guy doing surveys from his basement; this is from companies that claim to be research companies being paid to do research by real clients. Maybe this is just honest incompetence and maybe all of the "real researchers" I've been banking on have found new careers that pay better, but it is more than a little disheartening.

It was in the midst of this near-Howard Beale moment that I read Robert Bain's recounting of his month as a panel respondent.

No wonder clients think they can do just as good a job as we can.