
Posts from October 2013

Pleeezz!

Today’s update from Research-live.com has this headline: “Online trackers not optimised for mobile could ‘compromise data quality.’” It goes on to explain:

GMI, which manages more than 1,000 tracking studies, claims that online trackers that haven’t been optimised for mobile platforms may exclude this growing audience, which could lead to a drop in data quality, reduced feasibility and the possibility of missing whole sections of the required population from research.

Let me be clear. I don’t disagree that online surveys need to be optimized for mobile and that the number of unintentional mobile respondents (aka UMRs) is large and growing. But a warning from an online panel company that scaring away UMRs may be leading to a drop in data quality because of “the possibility of missing whole sections of the required population from research” just drips with irony.

Let’s start with the fact that online research, at least in the US, by definition excludes the roughly 20% of the population that is not online. Research using an online panel of, say, two million active members excludes about 99% of the adult population. As the industry has moved more and more to dynamic sourcing it’s hard now to know how big the pool of prospective online respondents is, but it’s a safe bet that the vast majority of US adults are missing, and not at random.
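If you want to check that 99% figure, the arithmetic is simple. Here is a quick back-of-the-envelope sketch in Python; the 240 million count of US adults is my own round assumption for illustration, not a figure from the post:

# Back-of-the-envelope panel coverage check.
# ASSUMPTION: roughly 240 million US adults (a round illustrative figure).
us_adults = 240_000_000
panel_members = 2_000_000

coverage = panel_members / us_adults  # share of adults in the panel
excluded = 1 - coverage               # share of adults the panel misses

print(f"Panel covers {coverage:.1%} of US adults")    # -> 0.8%
print(f"Panel excludes {excluded:.1%} of US adults")  # -> 99.2%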

Surely, if we have figured out a way to deal with the massive coverage error inherent in the online panel model, we can handle the mobile problem.

I suspect that the real issue here is feasibility, not data quality, just as the now near-universal use of routers is about inventory management rather than improved representativeness. I wish that online panel companies would spend more time trying to deal with real data quality issues like poor coverage and inadequate sampling methods, but that’s only going to happen if their customers start demanding it.


Is market research for dummies?

A few weeks back I got myself in trouble with some folks when I posted this piece criticizing the new book, Neuromarketing for Dummies, without having read it. I also received some emails suggesting that the book is better than it sounds and that the authors are not pop science hacks, but experienced survey researchers with broad knowledge of the subject matter. I even got a very nice note from one of the authors asking that I read the book and let him know what I thought of it.

So I went ahead, bought a Kindle version of the book, and have now read it. My overall impression is that the authors indeed know their stuff and generally have done a nice job of organizing the material. But the Dummies format was at the heart of my original critique, and reading the book has not changed that view. As the authors tell us early on, “we don’t take our subject matter or ourselves too seriously.” And, like all Dummies books, it is designed so that you can jump in and out, focusing just on “the important parts” with the help of icons that draw your attention to tips, important points, and things to avoid. There is even an online cheat sheet that boils it all down to a single web page. What there is not is a list of references that lets you dig into the content in more detail, although I understand one is in preparation. They also refer the reader to two other related books in the series dealing with neuroscience and behavioral economics, but I have yet to look at them.

There are some content nits I could pick, but that is not the point of this post; it is only to say that my initial criticism of the book stands. I worry that it encourages a superficial understanding of what are pretty complex ideas, something that I think is all too common in contemporary market research. If, as some argue, behavioral science has the capacity to transform much of market research, a point of view for which I have some sympathy, we need more than a superficial understanding of its basic principles and theories. This is subject matter that deserves to be taken seriously.

No offense, Stephen, Andrew, and Peter, but I just wish you had written a different kind of book.


Thinking fast and slow in web survey design

I am a huge fan of Jakob Nielsen's work on web usability. He has a post out this week--"Four Dangerous Navigation Approaches that Can Increase Cognitive Strain"--that puts web usability into a System 1/System 2 framework. As I've said many times before, I believe that his research on web usability has important implications for web survey design.

In his post Nielsen offers evidence for a principle I have long argued is important in web survey design: unfamiliar answering devices and complex instructions absorb cognitive energy and distract from the key task of simply providing an answer to a question. I'm not going to rehash Nielsen's full post here, but I encourage you to follow the link and have a read for yourself. You may want to pay special attention to dangerous navigation approach number four: "Fun" tools that become obstacles.