
Posts from December 2007

The Latest Thinking on Panel Quality

Way back in October I spent two days chairing the ESOMAR Panels Conference in Orlando. Eighteen papers in all under four broad headings: Data Quality, Panel Quality, Proprietary Panels, and New Developments.

There was a lot of good stuff and some not so good, but one thing that really struck me is the gradual shift in thinking about the three main types of "bad respondents" that I and others have been talking about over the last two years.

First, there are the professional respondents who belong to multiple panels and do lots of surveys. More and more studies seem to be concluding that multipanel membership is not as problematic as we might previously have thought. Sure, you can find studies that point to some major demographic and behavioral differences, but the balance of studies seems to be finding few major effects. There are fewer studies of conditioning effects from taking lots of surveys, but that work will come. Most of us would agree that fewer surveys are better than lots of surveys, but as a practical matter panel companies are going to continue to survey their members to the max as long as the demand is strong. So we need to get a better handle on how this might be impacting our results.

Second, there are the fraudulents, that is, people who misrepresent their qualifications either when they join the panel or when they are answering the filter questions in a survey. The panel companies (at least the reputable ones) seem to have embraced this as a serious problem that they need to own, and they are now happy to tell you how they are validating respondents at the registration step. As researchers we need to continue to police this and not hesitate to turn these people in to the panel provider when we find them in surveys. They are arguably the biggest threat to online research.

Finally, there are the inattentive, that is, respondents who complete surveys quickly and don't really take the survey tasks seriously. Panel companies increasingly point to researchers as the ones who need to own this problem: long, uninteresting questionnaires presented in formats that invite behaviors like satisficing. This in turn is leading to a whole new industry-wide emphasis on "engaging respondents." The thinking seems to be that we can overcome long and uninteresting if we just present more creatively. More on that later.


“The Skills Breakthrough”

There is a really interesting article in the December issue of Research World co-authored by "holistic" research guru DVL Smith. In it he sets out the five skills we need to train into our staff if they are to deliver the actionable research required in today's MR environment. The five are:

  1. The ability to integrate different forms of market intelligence evidence into an overall picture of the consumer
  2. The knowledge to interpret "imperfect" evidence
  3. The ability to identify, value, and prioritize potent customer insights
  4. Being able to frame the choices for decision-makers
  5. Helping to make sure things happen

This is a tall order because it's a pretty substantial change from what people are taught in academic survey research programs, even those with a strong MR focus.


The MR Industry Works its Worry Beads

A couple of weeks back I attended the CASRO Data Collection Conference, not a bad place to get a feel for current thinking in the industry. Three things of significance seem to be on people's minds:

  1. Worries about data quality, or more accurately what to do about it, continue to occupy people's thinking. (There is a strong element of irony in this given that the issue has mostly been driven to the fore by folks in CPG who have been buying their research by the pound for decades. On the other hand, CPG is such a large share of the global spend in MR that what they say naturally gets a hearing.) Unfortunately, even though this buzz has dominated the industry for over a year, there don't seem to be a whole lot of creative ideas for dealing with it. Mostly what you hear comes down to (a) convincing clients they need to pay more for research if they want quality, (b) shortening surveys to be more respondent friendly, or (c) launching a major PR initiative to convince the world we really are doing great stuff. Then again, there was one especially shrewd comment made by Connie Ruben, one of the conference co-chairs, to the effect that some of this PR has to go on in the context of the one-on-ones that occur all the time between clients and their suppliers. That quickly gets us to what I think is the key question: do most of the people practicing MR really have as strong a grasp as they need to have of the basic principles of survey research? Put another way, can they help clients make good design decisions based on empirical evidence of best practice, or do they fall back on vague generalities with little hard evidence to bolster their arguments? Isn't part of the challenge here simply to become better researchers?
  2. We need more "engaging" online surveys to keep respondents in the game and responding well. This is the world of fancy interfaces and gadgets. Most of the research on this issue so far has shown that things like slider bars and mouse-driven sorting exercises exclude some respondents because of technology requirements, take longer for respondents to work with, and don't produce data that is significantly different from traditional designs. The technology issues are not as serious as they once were, and maybe the day of these new interfaces is dawning, but I have yet to see a piece of research that is convincing.
  3. Concerns about panel data quality seem to have subsided, at least for now. There are a number of things contributing here. First, panel companies have jumped all over the fraudulent respondent problem and are doing a better job of ferreting these people out at the panel registration phase. Second, the research on multi-panel membership and heavy survey taking has yet to reach a consensus on just how significant a problem these phenomena pose. Third, cleaning survey data for bad panelist behaviors like extreme satisficing and fraudulent responses on qualifying questions is becoming standard. That said, there are those who believe that when/if clients figure out that using panel data to size markets is dangerous business, the panel quality problem will come back with a vengeance.

All of this continues to be very interesting and lots of fun to talk about. Whether we are making any progress is a tougher call.