Posts from December 2012

Lie down with dogs, get up with fleas

Brian Singh has a post over on GreenBook that gives us his take on a recent conference where a group of venture capitalists shared their views on the investment opportunities in MR as it grapples with its future.  I didn’t find most of what Brian reported to be surprising; it’s pretty much the standard stuff we are used to hearing about the “transformation of market research.” 

But the last part of Brian’s post really struck a chord with me.  He calls it “the creep factor.”  I would call it the dark side of big data, and at the moment it’s mostly playing out with mobile. At the heart of it is the assembly of massive amounts of personal data on all of us, too much of which is collected by offering some attractive service for free (often discounts on popular consumer products) as a sort of head fake when the underlying business model is to amass large amounts of data on individual users and then monetize it via direct marketing.  To stay out of jail these services offer up long terms of use and privacy policies that they know the vast majority of us never read.

For people like me, who grew up and then old in the research profession, this weird twist on caveat emptor is at the heart of our misgivings about much of the NewMR.  Respect for respondents or participants or research subjects or whatever you want to call them was always at the center of everything we did.  That quaint notion is rapidly going the way of the landline telephone.

I am not so naïve that I don’t understand the need for MR companies to evolve in response to new technologies, new modes of social interaction, and an increasingly competitive marketplace.  My hope is that we can do so without betraying what I consider to be an essential part of the ethical foundation of the research profession.  Pete Cape once said, “We have allowed this industry to be taken over by venture capitalists and technology geeks.”  I hope he’s wrong.


Measuring the right stuff

A few weeks back I saw a post by online usability specialist Jakob Nielsen titled, “User Satisfaction vs. Performance Metrics.”  His finding is pretty simple: Users generally prefer designs that are fast and easy to use, but satisfaction isn't 100% correlated with objective usability metrics.  Nielsen looked at results from about 300 usability tests in which he asked participants how satisfied they were with a design and compared that to some standard usability metrics measuring how well they performed a basic set of tasks using that design.  The correlation was around .5.  Not bad, but not great.  Digging deeper, he found that in about 30% of the studies participants either liked the design but performed poorly or did not like the design but performed well.
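
To make the arithmetic behind Nielsen’s point concrete, here is a minimal sketch, using simulated data rather than Nielsen’s, of how a correlation near .5 can coexist with a sizable share of “mismatched” studies, i.e., cases where satisfaction and performance fall on opposite sides of their medians.

```python
# Illustrative only: simulated satisfaction and performance scores for a set
# of hypothetical usability studies, not Nielsen's data.
import numpy as np

rng = np.random.default_rng(0)
n = 300
performance = rng.normal(size=n)
# Build satisfaction so it correlates with performance at roughly r = .5
satisfaction = 0.5 * performance + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

r = np.corrcoef(satisfaction, performance)[0, 1]

# "Mismatch": liked the design but performed below the median, or vice versa
liked = satisfaction > np.median(satisfaction)
performed_well = performance > np.median(performance)
mismatch_rate = np.mean(liked != performed_well)

print(f"correlation: {r:.2f}")
print(f"share of mismatched studies: {mismatch_rate:.0%}")
```

Under these simulated conditions a correlation of about .5 leaves roughly a third of cases in the “liked it but struggled” or “disliked it but did fine” quadrants, which is in the same ballpark as the 30% Nielsen reports.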

I immediately thought of the studies we’ve all seen promoting the use of Flash objects and other gadgets in surveys by pointing to the high marks they get on satisfaction and enjoyment as evidence that these devices generate better data. The premise here is that these measures are proxies for engagement and that engaged respondents give us better data.  Well, maybe and maybe not.  Nielsen has offered us one data point.  There is another in the experiment we reported on here, where we found that while the version of the survey with Flash objects scored higher on enjoyment, respondents in that treatment showed evidence of disengagement at the same rate as those tortured with plain HTML: they failed the classic trap questions just as often.
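
For readers who want to run this kind of check themselves, here is a minimal sketch of a two-proportion test of whether two survey treatments fail a trap question at different rates. The counts below are invented for illustration; they are not the figures from the experiment mentioned above.

```python
# Minimal sketch of the kind of check described above: do two survey
# treatments fail trap questions at the same rate?  Counts are hypothetical.
import numpy as np
from scipy.stats import norm

fail_flash, n_flash = 87, 600   # hypothetical: Flash-version respondents failing a trap
fail_html, n_html = 92, 600     # hypothetical: plain-HTML respondents failing a trap

p1, p2 = fail_flash / n_flash, fail_html / n_html
p_pool = (fail_flash + fail_html) / (n_flash + n_html)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_flash + 1 / n_html))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))   # two-sided test of equal failure rates

print(f"failure rates: {p1:.1%} vs {p2:.1%}, z = {z:.2f}, p = {p_value:.2f}")
```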

A cynic might say that at least some of the validation studies we see are more about marketing than survey science.  A more generous view might be that we are still finding our way when it comes to evaluating new methods.  Many of the early online evangelists argued that we could not trust telephone surveys any more because of problems with coverage (wireless substitution) and depressingly low response rates.  To prove that online was better they often conducted tests showing that online results were as good as what we were getting from telephone.  A few researchers figured out that to be convincing you needed a different point of comparison.  Election results were good for electoral polling and others compared their online results to data collected by non-survey means, such as censuses or administrative records.  But most didn’t.  Studies promoting mobile often argue for their validity by showing that their results match up well with online.  There seems to be a spiral here and not in a good direction.

The bottom line is that we need to think a lot harder about how to validate new data collection methods.  We need to measure the right things.


Twitter and electoral polling: Not ready for prime time

Last night I chaired a panel at PAPOR on the general subject of social media and the 2012 election. The panelists were Paul Hitlin from Pew and Mark Mellman of the Mellman Group. The topic quickly narrowed to Twitter and the 2012 election. Both panelists had done substantial tracking of Twitter over the course of the campaign and showed multiple comparisons of Twitter volume, topics, and sentiment to polling results.

There were two main conclusions. Not surprisingly, while there were points of agreement between Twitter and the polls, the agreement was often weak, and at other times the two sources moved in opposite directions. Most striking was a Pew comparison showing a Romney surge on Twitter in the last week that seemed to say he would win. (Hitlin started his presentation by claiming that if Twitter had been correct, Ron Paul would be president!)

The second conclusion was more interesting. While the presenters agreed that Twitter did a poor job of tracking the campaign, there were numerous disagreements between them, most of which they suspected were rooted in their having used different software. Hitlin used Crimson Hexagon while Mellman used the Twitter-approved Topsy. The results of the two projects clearly demonstrated both how much the choice of software matters and how unprepared many researchers are at this point to make an informed choice. No one in the room, including me, was prepared to debate the merits of one approach over the other. ESOMAR has tried to provide some help with its 24 Questions, but while we might know the questions to ask, most of us seem ill-prepared to evaluate the answers.