More research on mobile

Regular readers of this blog (if there are any) may have noticed that I have been a social media no-show for the last couple of months.  This was not deliberate; it just sort of happened.  But now I am jumping back in and can’t help but wonder whether this NewMR thing isn’t like an American TV soap opera in that you can miss a whole bunch of episodes without losing track of the story line.

In that vein, I found myself checking out the latest issue of the online journal Survey Practice.  This journal is an AAPOR creation designed to give an outlet mostly to practitioners without subjecting them to the rigor of preparing a full submission to Public Opinion Quarterly or the new AAPOR journal, the Journal of Survey Statistics and Methodology. Which is not to say that Survey Practice is the sort of lightweight fare we too often hear from the conference podium.  It’s not.  There are literature reviews, well-designed experiments, and good solid analyses.

The current issue is a case in point.  There are at least two articles that readers of this blog, especially those interested in mobile, might find worth a read.  One is from some folks at Nielsen and looks at surveys on tablets.  They have some good data to show what many of us probably expect, namely, that when respondents use tablets to complete online surveys there are few ill effects, although the same cannot be said for smartphones.  The second, by a team from Olivetree and SSI, looks at in-the-moment data collection and finds that there can be a sort of Hawthorne effect when respondents are asked to report each time they eat a snack. I’ve not done justice to either of these studies, so you should check them out on your own.

I’ve long felt that innovations in research methods start in MR with a sort of entrepreneurial phase in which benefits are claimed and data are presented to support the claims, but the “studies” seldom rest on a literature or acknowledge the work of others in the field.  This is followed by better-designed experiments from researchers who may be from MR but have an academic bent that produces more solid research.  Finally, the academics jump in to bring the empirical and the theoretical together, providing a scientific basis for the new methodology.  We are not there yet with mobile, but these two pieces from Survey Practice are starting to move us down that road.

Whether or not this description of these two studies has whetted your appetite, I strongly recommend that you go to the site and register for alerts as new issues are posted.


Why does it work?

Now there is a question that we ought to ask more often. At last week's MRMW Conference we heard about a whole lot of different methods—some mobile and some online—whose evangelists made great claims. And those claims generally focused on the great insights they can deliver about some generic, ill-defined group, typically "people" or "consumers." It's not always clear which people or which consumers we're learning about. Does it mean everybody or just somebody? If somebody, exactly who?

So around the end of the second day, after still another presentation describing an approach to online research that delivers incredibly accurate results at an equally incredibly low price without all that messy sampling and weighting, I asked the question: Why does it work? After a pause I got my answer: "Because we are very careful."

Say what you will about probability sampling, but you have to admit that it has a theory underneath it, some scientific underpinnings, and lots of empirical research to make a strong argument for why it works. Violate its key assumptions through incomplete coverage or high nonresponse and it may not work so well. The New MR crowd has those arguments more or less down pat (although a fair number of them seem not to have gotten the memo about sampling mobile phones now being SOP).
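In case "theory underneath it" sounds hand-wavy, here is a minimal sketch of the sort of result I have in mind: the textbook Horvitz-Thompson argument from design-based sampling. The notation is mine, not anything from the conference.

```latex
% Why probability sampling "works": the Horvitz-Thompson estimator.
% Every unit i in the population U has a known, nonzero probability
% \pi_i of being included in the sample s.
\[
  \hat{Y}_{HT} = \sum_{i \in s} \frac{y_i}{\pi_i}
\]
% The estimator is unbiased for the population total Y by design:
\[
  E\left[\hat{Y}_{HT}\right]
    = \sum_{i \in U} \pi_i \, \frac{y_i}{\pi_i}
    = \sum_{i \in U} y_i
    = Y .
\]
% Incomplete coverage means some \pi_i = 0; high nonresponse means
% the effective \pi_i are unknown. Either one breaks the argument,
% which is exactly the "violate its key assumptions" point above.
```

Nothing exotic, but it is an actual answer to "why does it work," which is more than I got at the podium.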

But if we are doomed to non-probability sampling methods, then let's at least hold them to the same standard. Let's ask why they work. Let's ask how we are to know that the methods used to collect and analyze the data lead to a reasonably accurate measure of the attitudes and behaviors of the people the research claims to represent.

Let's start having those conversations. "It just works" should not be enough. That crosses the line from science to faith, and we all know what Mark Twain said about faith.


Storytelling comes to the conference podium

The first two presentations at the conference this morning have helped me to understand what it is that I'm finding so disquieting about this conference. What we are hearing is not the usual kind of research conference presentation. What we are hearing are stories. No data. No experiments. No hypothesis testing. Just stories. Oral blog posts from the podium with some neat visuals by people who are smart and articulate.

OK, so I probably am exaggerating some, but it makes me wonder whether this is where the current emphasis on "storytelling" is taking us as an industry. And, of course, is that a bad thing? Are we really connecting the dots or just making unconnected leaps of logic to get to a viewpoint? In my previous post I worried that we are too focused on "just the facts, ma'am" as a deliverable. We probably should be just as worried about wonderful-sounding insights that are disconnected from those facts.



Kids in a candy store

I'm at the Market Research in the Mobile World Conference in Cincinnati where the constant stream of cool MR mobile applications is off the charts. You can't help but be impressed with the creativity being brought to the task of using mobile to give us a whole new perspective on consumer behavior. And the people showing their wares here in Cincinnati all seem to be having a great time doing it. The excitement is palpable.

But the most interesting session of the day to my ear was the client panel to which our host and MR impresario Lenny Murphy posed the question: "What's the gap between what clients need and what suppliers are offering?" The answers all seem to boil down to our failure to help them understand how all of this cool new stuff and the data it produces can be put to work in their companies. There were lots of specifics—boring reports that need to be rewritten, failing to help internal stakeholders integrate the data into their existing databases, not being clear about how new methodologies do what they purport to do, not understanding their business and industry, etc. It's certainly not the first time we've heard this from clients. And I've seen zero evidence so far that mobile helps.

Clients have been complaining for years that researchers mostly just deliver facts, or what they claim to be facts, and making real business sense of those facts is not a deliverable. For all the talk about the importance of storytelling, it's just another way to present the facts. Ditto for most of the cool video and animated deliverables being generated out of mobile. Clients want us to connect the dots from those facts to their business and they keep telling us over and over that we're not getting it done. I guess we're just having too much fun with data collection.