A bad survey or no survey at all?

For a whole lot of reasons that I won't go into, online privacy is suddenly front and center, not just in the research industry but in the popular press as well. The central message is that people are "concerned," but about what exactly and by how much? Well, the answers there are all over the map. One of the few clear things about this whole debate, if that's what it is, is the ongoing misuse of online surveys to describe what is going on.

I am hard pressed to think of anything sillier than using online surveys to help us understand attitudes about online privacy. Think about it. You have a sample of people who have signed up to share their personal behavior, attitudes, and beliefs in online surveys. What in God's name could possibly make us think that these online extroverts, this sliver of the population, could represent the range of attitudes about online privacy among "consumers," as generally alleged? If ever there was an example of an issue where online is not fit for purpose, this is it. Yet these surveys are churned out weekly, generally to serve the commercial interests of whoever commissioned them, and often widely cited as some version of the truth.

To quote H. L. Mencken, “A newspaper is a device for making the ignorant more ignorant and the crazy crazier.” Sometimes it feels like online surveys serve a similar purpose.


Thinking fast and slow in web survey design

I am a huge fan of Jakob Nielsen's work on web usability. He has a post out this week--"Four Dangerous Navigation Approaches that Can Increase Cognitive Strain"--that puts web usability into a System 1/System 2 framework. As I've said many times before, I believe that his research on web usability has important implications for web survey design.

In his post Nielsen offers evidence for a principle I have long argued is important in web survey design: unfamiliar answering devices and complex instructions absorb cognitive energy and distract from the key task of simply providing an answer to a question. I'm not going to rehash Nielsen's full post here, but I encourage you to follow the link and have a read for yourself. You may want to pay special attention to dangerous navigation approach number four: "Fun" tools that become obstacles.


New survey says everybody has a price

Like many people in this business I've been thinking a lot about data privacy these last few weeks. So when I see a headline from Research-Live come into my email saying, "Half of consumers willing to share their data, says survey," I wonder what's up, because it doesn't quite gel with other data I'm seeing. On close examination it's only 45%, and there are other hedges as you go down through the piece. Most importantly, the right verb for the headline probably is "sell" rather than "share." It turns out to be not terribly earth-shattering, despite noble attempts by the sponsor and the spokesperson for the company that did the work to make it sound special.

The real issue for me is not the numbers; it's whether I should believe any of what this piece says. I would like to know at least something about how this research was done beyond the N and the countries where people were interviewed. Was it online? How was the sample drawn? Who provided it? How were the questions worded? Was there weighting? And so on. I spent a few minutes searching the web for more info, but all I got was more links to the same unhelpful press release.

I don't mean to single out the good folks at Research-Live. This stunning lack of transparency is now commonplace in virtually all media channels. Online has made it possible for pretty much anybody to do a survey, whether they know or care about what they are doing or not. The web is awash in press releases with exciting findings from surveys, often with zero detail to help the reader understand whether those findings have any real meaning or are just cherry-picked from a bullshit survey. All of this is one more reason why the public has such low regard for surveys and why late-night comedians can get an immediate giggle from their audience by saying, "There's a new survey out today . . ."

As it turns out, there is another survey on the same topic and with a similar finding:

According to the survey, 57 percent of consumers are willing to share additional personal information, such as their location, top five Facebook friends' names and information about family members, in return for financial rewards or better service, while 54 percent would even allow this data to be passed on to a third party, under the right conditions.

No details on the methodology used for this survey either. And I'm not going to jump to any conclusions just because the survey was released on the same day the sponsor announced a new suite of online data management and analytic products.

But I wonder: did the survey ask about throwing grandma in to get a better price?


Pew takes a serious look at Google Consumer Surveys

The room is full here at AAPOR, mostly, I suspect, to hear a presentation of Pew's comparison of the results from a dual-frame (landline plus cell) telephone survey and Google Consumer Surveys. There is no shortage of people I've talked to, here and elsewhere, who think that Pew was overly kind in characterizing the differences. So it will be interesting to see how this plays out. Granted, it's back to keeping score, but I can't resist watching.

Scott Keeter is doing the presentation, and already I feel better. (I'm sure he didn't mean it as a joke, but Scott started by describing Google's quota sampling strategy as based on Google knowing "something about users.") More seriously, he is positioning this in a fit-for-purpose framework.

Scott has shown a chart estimating the mean difference across 52 comparisons at 6.5 points. Not awful, and not great. Some topics seem to work well, but others do not. The problem, of course, is that there is no way at the moment to figure out when it will work and when it will not.
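For readers who want to see what that summary number captures, here is a minimal sketch, assuming (my assumption, not Pew's published method) that the figure is the average absolute gap between paired estimates from the two modes. The numbers below are hypothetical stand-ins; Pew's chart covered 52 real comparisons.

```python
# Sketch of a mean-absolute-difference comparison between two survey modes.
# All estimates here are made up for illustration only.

phone = [45.2, 61.8, 30.4, 72.1, 18.9]  # dual-frame telephone estimates (%)
gcs   = [51.0, 55.3, 34.0, 65.8, 25.2]  # Google Consumer Surveys estimates (%)

# Absolute percentage-point gap for each paired estimate
diffs = [abs(p - g) for p, g in zip(phone, gcs)]

mean_abs_diff = sum(diffs) / len(diffs)
print(f"Mean absolute difference across {len(diffs)} comparisons: "
      f"{mean_abs_diff:.1f} points")
```

A summary statistic like this is easy to compute but, as noted above, it says nothing about which topics will track well and which will not.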

He says that Pew will continue to use Google Consumer Surveys, but not for point estimates. The tool seems useful for some question testing and quick feedback to help with survey design. Hence the link to fit-for-purpose. But hardly a game changer.


Representativeness is dead, long live representativeness!

I'm in Amsterdam where for the last two days I've attended an ESOMAR conference that began as a panels conference in 2005, morphed into an online conference in 2009 and became 3D (a reference to a broader set of themes for digital data collection) in 2011. This conference has a longstanding reputation for exploring the leading edge of research methods and this one has been no different. There have been some really interesting papers and I will try to comment on a few of them in the days ahead.

But the overriding theme, it seemed to me, was that mobile has elbowed its way to the front of the pack and, in the process, has become as much art as science. People are doing some very clever things with mobile, so clever that sometimes it takes on the character of technology for technology's sake. Occasionally it even becomes a solution in search of a problem. This is not necessarily a bad thing; experimentation is what moves us forward. But at some point we need to connect all of this back to the principles of sampling and the basic requirement of all research: that it deliver some insight about a target population, however defined. Much of the so-called NewMR has come unmoored from that basic principle, and the businesses that are our clients are arguing about whether they should be data driven at all or simply rely on "gut."

At the same time, we've just seen a fascinating story unfold in the US elections, one that has been as much about data versus gut as Obama versus Romney. The polls have told a consistent story for months, but there has been a steady chorus of "experts" who have dismissed them as biased or as simply missing the real story of the election. An especially focused, if downright silly, framing of the argument came from Washington Post columnist (and former George W. Bush advisor) Michael Gerson, who dismissed the application of science to predicting the electoral behavior of the US population as "trivial."

So today, regardless of their political preferences, researchers should take pleasure, and perhaps two lessons, from the election results. The first is that we are at our best when we put the science we know to work for our clients, and we do them a major disservice when we let them believe that representativeness is not important or is magically achieved. Shiny new methods attract business, but solid research is what retains it. The second is that while the election's outcome was forecast by the application of scientific sampling, the election itself was won with big data. The vaunted Obama ground game was as much about identifying whom to get to the polls as it was about actually getting them there.