A couple of weeks back I attended the CASRO Data Collection Conference, not a bad place to get a feel for current thinking in the industry. Three things of significance seem to be on people's minds:
- Worries about data quality, or more accurately what to do about it, continue to occupy people's thinking. (There is a strong element of irony here, given that the issue has mostly been driven to the fore by folks in CPG who have been buying their research by the pound for decades. On the other hand, CPG is such a large share of the global spend in MR that what they say naturally gets a hearing.) Unfortunately, even though this buzz has dominated the industry for over a year, there don't seem to be many creative ideas for dealing with it. Mostly what you hear comes down to (a) convincing clients they need to pay more for research if they want quality, (b) shortening surveys to be more respondent friendly, or (c) launching a major PR initiative to convince the world we really are doing great stuff. Then again, there was one especially shrewd comment from Connie Ruben, one of the conference co-chairs, to the effect that some of this PR has to happen in the one-on-ones that occur all the time between clients and their suppliers. That quickly gets us to what I think is the key question: do most of the people practicing MR really have as strong a grasp as they need of the basic principles of survey research? Put another way, can they help clients make good design decisions based on empirical evidence of best practice, or do they fall back on vague generalities with little hard evidence to bolster their arguments? Isn't part of the challenge here simply to become better researchers?
- We need more "engaging" online surveys to keep respondents in the game and responding well. This is the world of fancy interfaces and gadgets. Most of the research on this issue so far has shown that things like slider bars and mouse-driven sorting exercises exclude some respondents because of technology requirements, take longer for respondents to complete, and don't produce data that is significantly different from traditional designs. The technology barriers are not as serious as they once were, and maybe the day of these new interfaces is dawning, but I have yet to see a piece of research that is convincing.
- Concerns about panel data quality seem to have subsided, at least for now. A number of things are contributing here. First, panel companies have jumped all over the problem of fraudulent respondents and are doing a better job of ferreting them out at the panel registration stage. Second, the research on multi-panel membership and heavy survey taking has yet to reach a consensus on just how significant a problem these phenomena pose. Third, cleaning survey data for bad panelist behaviors like extreme satisficing and fraudulent responses on qualifying questions is becoming standard. That said, there are those who believe that when/if clients figure out that using panel data to size markets is dangerous business, the panel quality problem will come back with a vengeance.
All of this continues to be very interesting and lots of fun to talk about. Whether we are making any progress is a tougher call.