
Posts from November 2011

Thinking or just plain stumped?

Last night I had dinner with an old friend who also is a world-renowned and widely published expert on questionnaire design. We chatted some about what's happening in MR and I asked for his take on the industry's obsession with speeders, i.e., respondents we decide have answered questions too quickly. Or, more specifically, does a longer response time signal a more thoughtful and better answer? His first reaction was to suggest a U-shaped curve with the sweet spot in the middle. A really fast response might indicate no thought at all, while a longer lag in responding may signal some problem in coming up with what the respondent thinks is an appropriate response. Maybe the question is confusing, or the first answer the respondent comes up with doesn't fit the response options, or the respondent just feels like his answer is not good enough, that it needs work before reporting it. There also might be differences between questions that ask about attitudes, where we want a top-of-mind response, versus questions about behavior, where more searching of memory might ultimately come up with a more accurate answer.

As we talked I was reminded of how many of the metrics we've come to use in online surveys as measures of response quality are crude at best. We are quick to delete respondents who answer "too quickly" or straightline when there may well be circumstances under which those are perfectly reasonable response behaviors. But few people ever talk about getting rid of slow responders, even though the quality of their answers may be as bad as or worse than that of those who answer quickly. I think this points to our tendency to assess quality by using those things that are easily measured. We can count grids, measure straightlining, compute completion times, etc., but we don't have much of a clue about how to measure the impact of poorly formed questions, answer categories that don't resonate, or questions that ask about things respondents just don't care about. My friend does his share of litigation consulting, and sometimes he can convince a client to go through a systematic questionnaire development process that includes focus groups, cognitive interviews, and a pretest. He finds that when they do this the results they get with online panels are much better than when they just sort of wing it. But this sort of systematic questionnaire development is not used nearly as often as it should be. To our detriment for sure, but even more to respondents'.
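Since the point turns on how easy these crude metrics are to compute, here is a minimal sketch of the kind of flagging being described. It is illustrative only: the column names, the grid items, and the 180-second cutoff are all invented, not taken from any real study or from my friend's practice.

```python
import pandas as pd

def flag_quality(df, grid_cols, min_seconds=180):
    """Add crude quality flags: too-fast completion and grid straightlining."""
    out = df.copy()
    # Speeder flag: finished in less time than an assumed minimum.
    out["speeder"] = out["completion_seconds"] < min_seconds
    # Straightliner flag: identical answer to every item in a grid question.
    out["straightliner"] = out[grid_cols].nunique(axis=1) == 1
    return out

# Made-up example data: the first respondent is both a speeder and a straightliner.
data = pd.DataFrame({
    "completion_seconds": [95, 640, 410],
    "q1_a": [3, 4, 2],
    "q1_b": [3, 2, 5],
    "q1_c": [3, 1, 4],
})
print(flag_quality(data, grid_cols=["q1_a", "q1_b", "q1_c"]))
```

Ten lines of code will flag speeders and straightliners; no amount of code will tell you whether the question itself was worth asking.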


What skills does a market researcher need?

My inaugural post on this blog was on August 28, 2005.  I started the blog as a way to share information inside my company.  In those days posts were almost always true to the tagline and focused on surveys.  But times change, and today the MR industry is nowhere near as focused on surveys as it was six years ago.  We now collect data in many different ways and from many different sources.  I used to believe that the training I got over 11 years working at one of the finest survey research organizations in the world (NORC at the University of Chicago) was the best possible training for a career in research.  That’s not true anymore.  At least not in MR.  So if knowing how to design and conduct high quality surveys is no longer the essential skill of a good market researcher, then what skills do we need?  I think there are three.

The first is the ability to evaluate evidence.  By that I mean the ability to look at how the evidence, whatever its source(s), was brought together and see with a clear eye both its strengths and its weaknesses.  What’s missing?  What are the biases?  How well does it reflect the behaviors and attitudes of the target population?  Where are the red flags?

The second is the ability to understand what the evidence is saying.  How does it contribute to our understanding of the research problem?  How does it fit or not fit with whatever else we know or with other data we might have?  The old discipline of hypothesis formation and testing is not practiced as it once was but we still need to be systematic in how we go about analyzing evidence.

Finally, we need to be able to translate the results of our analysis into insights for our clients, that is, to recommend clear steps that, if taken, will lead to improved business results.  I think we have come to believe that this is as much art as it is science.

Judging from what I hear and see across the industry there is plenty of room for improvement in all three.  So I keep on bloggin’. . .


There you go again!

One hears lots of silly things said at MR conferences, and one of the silliest and most oft-repeated refrains is that you can't do surveys with probability samples any more. There are even those who say that you never could. As often as I get the chance I point out that that's total nonsense. Lots of very serious organizations draw high quality probability samples all the time and get very good results. The prime example here in the US is the Current Population Survey, the government survey used as the basis for calculating the unemployment rate each month. Pretty much everything that comes from Pew is based on probability sample surveys, as are many of those political polls that we follow so breathlessly every four years.

The concept of a probability sample is very straightforward. The standard definition is a sample for which all members of the frame population have a known, nonzero chance of selection. Unless you have a complex stratified or multi-stage design, it's a pretty simple idea: as long as you have a full list of the population of interest to draw on and everyone on that list has some chance of being selected, the resulting sample can be said to represent that population. But there are some serious challenges in current practice.
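As a back-of-the-envelope illustration of that definition (my sketch, not anything taken from the CPS or Pew), an equal-probability draw from a complete frame gives every unit a known, nonzero selection probability and a corresponding design weight. The frame and sample sizes below are invented.

```python
import random

# Hypothetical frame: a complete list of the population of interest.
frame = [f"household_{i}" for i in range(10_000)]
n = 500  # invented sample size

sample = random.sample(frame, n)     # equal-probability selection, no replacement
inclusion_prob = n / len(frame)      # known, nonzero chance for every unit: 0.05
design_weight = 1 / inclusion_prob   # each sampled household stands in for 20

print(f"P(selection) = {inclusion_prob:.2f}, design weight = {design_weight:.0f}")
```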

The first is assembling a frame that includes the entire population you want to study. For example, because of the rise of cell-phone-only households, the landline frame that used to contain the phone numbers of well over 90% of US households no longer does. So it has become standard practice to augment the landline frame with a cell phone frame to ensure full coverage of the population. Clients often can supply customer lists that do a good job of covering their full customer base, and we can draw good samples from them as well. Online panels are problematic because they use the panel as the frame, and it contains only a very small fraction of the total population.
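One wrinkle worth spelling out: a household reachable on both frames gets two chances to be drawn, but as long as the two samples are drawn independently its overall selection probability is still known, which is what keeps a dual-frame design a probability sample. A hedged arithmetic sketch with invented frame and sample sizes:

```python
# Invented per-frame selection probabilities for a dual-frame telephone design.
p_landline = 2_000 / 40_000_000   # chance of being drawn from the landline frame
p_cell = 1_500 / 50_000_000       # chance of being drawn from the cell frame

# A household on both frames is selected if either independent draw picks it up,
# so its overall selection probability (and design weight) is still known.
p_dual = 1 - (1 - p_landline) * (1 - p_cell)
print(f"P(selection) = {p_dual:.8f}, design weight = {1 / p_dual:,.0f}")
```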

The second major challenge is declining cooperation. While there are studies that show even surveys with alarmingly low response rates can produce accurate estimates, low response rates make everyone nervous, raise doubts about representivity, and call results into question. The Current Population Survey gets response rates of 90 percent plus, and so we trust the unemployment rate, but that kind of response is very unusual.

There are other challenges as well but I think it's the deterioration of the landline frame and very low response rates that cause some people to think that probability sampling is no longer possible. Anyone willing to spend the time and the money will get very accurate estimates from a probability sample, better than anything they'll get with an online panel or other convenience samples.

As I have written numerous times on this blog, the lure of online has always been that it's fast and cheap, not that it's better. And depending on how the results are to be used the method can be just fine, fit for purpose. But sometimes the problem requires representivity and when it does probability sampling is still the best way to get it.


Boys and toys

The title of this post is an expression a friend uses to describe the apparent male affinity for gadgets of all kinds—computers, cars, mobile phones, power tools and consumer electronics in general. I'm not saying that MR, or even specifically the NewMR, is a male-dominated business, but nothing excites us more than cool new toys.

This came to mind when I read Jeffrey Henning's latest post over on the Affinnova blog about still another presentation about neuroscience. Now I am completely on board with the idea that emotions play a much larger role in consumer decision making than our methods generally have acknowledged, and that we need to find ways to evolve those methods to take that into account. But, as I hinted in a previous post, I have concerns about whether neuroscience, or at least our imperfect understanding of it, is the right solution. Neuroscience is difficult stuff, even for people who do it all the time, and it's not at all clear that we really know how to accurately interpret the reactions of different parts of the brain to different stimuli.

So as I was reading Jeffrey's post I kept wondering whether wiring up focus group participants like Alex in A Clockwork Orange offered insights above and beyond what we have been doing for the past 25 years with dial testing. In dial sessions people may not be able to clearly express why they feel more positive or negative about an ad as it plays out, but the squiggly line probably will tell you when they started to feel it and by how much. With a little probing they can probably tell you what in the ad triggered that feeling.

What's missing from the presentation that Jeffrey describes (or at least in his telling of it) is the controlled experiment that demonstrates why what we learn from one technique is better than what we learn from another. IMHO, too much of the NewMR is built on assertions that this is better than that but with too little data to make the case.


Amsterdam Syndrome

Back in September I asked my friend and colleague, Theo Downes-LeGuin, to write a blog post that summarized his reaction to the ESOMAR Congress.  He wanted to take some time to let it all sink in and has now sent me something.  Here it is.

After two days at the ESOMAR Congress in September, I must conclude that I am washed up, a change-averse has-been. Yes, there is evidence to the contrary. Though trained in classical survey methods I've spent my career with one foot in qualitative research. I adopted disk-by-mail methods early, before the internet (quaint!). I was an early proponent of online qual despite client resistance. I like to think that I contributed to many of the now-standard approaches for maintaining data quality in web panel surveys. And of late I've been working on how we make online experiences more engaging for online insights providers (née respondents).

But over 48 hours in Amsterdam, I was told by enough speakers and delegates that to even think about, let alone discuss, the themes that have occupied me for the past 20 years is to consign myself to the Pleistocene Era. And if you hear something enough, you start to believe it's true. Call it Amsterdam Syndrome.

After a few weeks at home, the syndrome is wearing off. I don't know about you, but I get a bit churlish when lots of people start telling me that we're doomed if we don't do X (where X at the moment could mean abandon surveys, replace representivity with relevance, or gamify everything). The end is nigh, as the wild-eyed man with the sandwich board has been telling us for 50 years, and one of these days he'll be correct. But at the moment, more likely we're simply replacing one orthodoxy with another.

Our new orthodoxy hasn't quite taken shape yet, beyond the assertion that Market Research Must Change In Order to Survive – a statement so obvious and enduring as to be hardly worth repeating, at least no more worth repeating now than it was 20 years ago (yes, there was turmoil and risk of irrelevance then as well). Any researcher worth his or her salt is constantly exploring, poking at sacred cows, and improving the status quo. Sometimes we have to explore a little harder due to changing environmental factors, and some changes are bigger than others.

But a new orthodoxy surely will take shape, and it will take its cues from business, not from the sciences. Most of the time business serves MR well as a model, both because our raison d'être is to serve other businesses and because we are ourselves businesses. But the model falls apart a bit when we are trying to judge how we should change our own practices. Businesses are by definition opportunistic; they share very clear measures of success (growth and profitability); and they are very responsive to the immediate environment.

Scientific principles, on the other hand, are really handy when it comes to change management. God knows, science is subject to its own orthodoxies. But in between those conflicts and upheavals, scientific principles and process provide a consensus basis on which to design experiments and evaluate outcomes, all in order to decide which changes are worthwhile and which are rubbish. Which I believe is what we are paid to do for our clients…so why wouldn't we do it for ourselves?