
Posts from October 2007

Report from “The Market Research Event”

Theo Downes-LeGuin sent the following email, which I have copied below.

I returned last night from 3 days at IIR's "The Market Research Event" (thankfully they no longer render "The" in all-caps). I hope the following observations are useful:


  1. I've been to 5 years of this conference and while I didn't learn a lot as a researcher this year, it remains a strong marketing venue in my opinion. All of our respective client bases are well-represented at the conference. I believe we should consider continuing sponsorship next year and have other IGs represented (sponsorship usually provides 2-4 delegate passes). But I think we should avoid being an exhibitor. Our money is much better spent obtaining a speaking spot and providing leave-behind schwag at a sponsored meal.
  2. While client participation remains strong, vendors have discovered that this is a "target rich" environment, so it felt to me as if the client/vendor ratio had shifted from 50/50 to about 30/70. The primary impact of this on my behavior is that I focused on spending time with current clients and not pestering prospects, who already feel like raw meat left in the lion's den.
  3. Presentation subject matter and quality were all over the map, but I noticed that the sessions on internet research, internet panels and especially on proprietary client panels were packing in larger audiences than in the past.
  4. MRSI co-presented with e-Rewards on some experiments they've done around how to identify, trap and clean out underperforming panel respondents. The presentation gave me a bit of heartburn because it's stuff we've been talking about and acting on for years, but in my more generous moments I am just happy that the word is getting out.
  5. Socratic (Bill McElroy) gave a very good co-presentation with HP on the pros and cons of proprietary panels. Their main point is that affinity with the panel community is the best overall predictor of response rates and response quality in specialty panels. Incentives are effective only as an acknowledgement, not as a compensation for time. In a branded proprietary panel (e.g., Dunkin' Donuts panel of its customers), such affinity is easy to achieve. In an unbranded panel, the affinity must be built around shared characteristics of the panelist, not the sponsor, which is harder.
  6. Jimmy Wales, the founder of Wikipedia, gave an excellent talk that had very little to do with market research, though at the end he said that "any company that is dependent on hierarchical distribution of proprietary information will face challenges in the coming decades" (which of course included the entire audience, both vendors and clients). This related nicely to an overview of Adobe's online research portal, which was very cool but completely lacked any form of Wiki-like information sharing or editing; placing the two talks in juxtaposition made it look as if Adobe had missed an obvious opportunity.

Let me know if you have any questions,


Advance Letters Still Work

There is an interesting article in the current issue of POQ that reports on a meta-analysis of the literature on the effect of advance letters on response rates in telephone surveys. The analysis shows that they continue to have an effect, increasing response rates by around eight to ten percentage points on average. Because the authors could not obtain the actual letters used in many of the studies, their analysis of what works and what does not is somewhat incomplete, but in general I came away with two messages:

  1. Any contact with a sample member prior to the first call is a plus, even if it's just a postcard.
  2. Follow the rules set out by Dillman and you will be OK (though that guidance applies less well to the design of Web surveys). Our own white paper on the topic has good guidance as well.

While the research genuflects to the recent focus on non-response bias, it really can't address the key issue: do advance letters draw into the survey different kinds of people who otherwise might not participate, thereby increasing representativeness, or do they just add more of the same?

The 12 Killer Questions

David Smith, a UK researcher who has been a major advocate for what he calls "holistic research," has proposed an interesting list of 12 questions that clients might ask us, or that we might ask ourselves, to evaluate the quality of our work. Here they are:

  1. To what quality level was the research conducted?
  2. What impact will the consumer evidence have on the final decision?
  3. Was the research design fit-to-purpose?
  4. What was the 'agenda' behind the study?
  5. To what extent did the research accurately reflect the target group being researched?
  6. To what degree has the interview medium affected the results?
  7. Did the interpreters of the research evidence really understand the critical 'research effects' at work?
  8. Was the appropriate analysis approach deployed?
  9. What level of 'business conceptualization' has been brought to the presentation of the consumer evidence?
  10. How well has all the available evidence been integrated into an impactful, compelling story, and have the outcomes of the study been presented in a way that will influence the decision-making audience?
  11. To what extent has the data been presented in the context of what we know about this genre of evidence-based decisions?
  12. Has the tactical feedback from the latest research study on this topic been incorporated into a wider 'meta-analysis' in order to identify any over-arching strategic trends?

He explains each of these in more detail in the full article on the ESOMAR Web site.  Definitely worth a read.