Posts from October 2012

The US election and the #NewMR

One of the most interesting side stories of the US election (at least for a survey geek) is the inconsistency in the polls. In part that's because there are so many of them, and with that proliferation has come what we might call a thinning out of polling expertise. But when a venerable polling firm like Gallup raises eyebrows with numbers that seem outside the mainstream of the other polls, you really wonder what's going on. For example, before Gallup suspended their daily tracking poll on Monday because of the East Coast weather disaster, they had Romney leading Obama 51% to 46% among likely voters.

Enter two favs of the NewMR: predictive analytics and predictive markets.

The hands-down champ in the predictive analytics crowd is Nate Silver and his Fivethirtyeight.com blog. Nate's getting a ton of criticism right now because the right wing doesn't like what his model says is going to happen. Or maybe it's just because he works at the New York Times, which makes US conservatives see red (well, blue actually). The interesting thing is that Nate's model is saying pretty much what other models are saying, and what they're saying is not far off from what the predictive markets are saying. The Washington Post's Ezra Klein sums it up nicely:

As of this writing, Silver thinks Obama has a 75 percent chance of winning the election. That might seem a bit high, but note that the BetFair markets give him a 67.8 percent chance, the InTrade markets give him a 61.7 percent chance and the Iowa Electronic Markets give him a 61.8 percent chance. And we know from past research that political betting markets are biased toward believing elections are more volatile in their final weeks than they actually are. So Silver's estimate doesn't sound so off... Silver's model is currently estimating that Obama will win 295 electoral votes. That's eight fewer than predicted by Sam Wang's state polling meta-analysis and 37 fewer than Drew Linzer's Votamatic.

Now granted, the principal inputs to all of these approaches, including the betting being done in predictive markets, are polling data. Lots and lots of polling data, with adjustments for past accuracy, known biases toward one party or the other and, in the case of the markets, individual guesses about what will happen between now and next Tuesday.
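
To make those "adjustments" concrete, here is a minimal sketch, in Python, of what bias-adjusted poll averaging can look like. It is emphatically not Silver's model (or Wang's, or Linzer's); the pollsters, house effects and sample-size weighting below are all invented for illustration.

    # A toy bias-adjusted poll average. Not any forecaster's actual model;
    # every number below is invented for illustration.

    polls = [
        # (pollster, obama_pct, romney_pct, sample_size)
        ("Pollster A", 49.0, 47.0, 1200),
        ("Pollster B", 47.0, 50.0, 900),
        ("Pollster C", 50.0, 46.0, 1500),
    ]

    # Hypothetical "house effects": each firm's historical lean toward one
    # party, in points, estimated from how its past polls missed the result.
    house_effect = {"Pollster A": +0.5, "Pollster B": -1.5, "Pollster C": +1.0}

    def adjusted_margin(pollster, obama, romney):
        """Obama-minus-Romney margin with the firm's historical lean removed."""
        return (obama - romney) - house_effect[pollster]

    # Weight each poll by sample size (a stand-in for the recency and
    # past-accuracy weights a real model would layer on top).
    total_n = sum(n for _, _, _, n in polls)
    average = sum(adjusted_margin(p, o, r) * n for p, o, r, n in polls) / total_n

    print(f"Bias-adjusted average margin: Obama {average:+.1f} points")

The point is the shape of the exercise, not the numbers: every approach Klein lists starts from the same raw polls and differs mainly in how it adjusts and weights them.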

This has been a crazy election season and my nerves are pretty frayed right now, but I am cheered by a certain delicious irony. Another unnamed MR blogger who preaches the predictive analytics gospel here, there and everywhere desperately wants those good old-fashioned telephone surveys to be right and for the predictive analytics to come up empty. In my case, I'm willing to take a methodological hit for the greater good of my country.


Norman doors and online questionnaire design

In usability circles the doors in the pictures on the right are called “Norman doors.” They’re named for Donald Norman, who wrote a fascinating little book called The Design of Everyday Things. In the book Norman talks a great deal about affordances, that is, properties of objects that help us to do things with them. Like open doors. But the book is also about user-centered design, that is, the design of affordances in ways that make it easy for people to use them. In Norman’s world, things are easiest when we already know how to use them, hardest when we have to read instructions to figure it out. This distinction he calls “knowledge in the head” versus “knowledge in the world.”

To get back to those doors, often when we see handles like the ones in the pictures our instinct is to pull them. After all, that’s what handles are for. That’s knowledge in the head at work. The knowledge in the world says “Push,” but as often as not we don’t see that until the door doesn’t work like we expect. If this has never happened to you, go find a Norman door (they’re everywhere), sit down, and watch people as they go in and out.

I sometimes think of Norman and his doors when I try to do a survey that has slider bars, drag-and-drops and other forms of what their designers like to call “engaging designs.” It seems to me that I spend as much time trying to figure out how to use some of the gadgets as I do coming up with answers to the questions. Worse yet, I see questionnaires that use different gadgets for the same question type. The survey becomes a puzzle to be solved. Yes, a game. But it’s a game that sucks cognitive energy away from the real survey task, which is thinking about the subject matter and answering the questions. I pull on that Norman door because I’m not thinking about how to open the door; I’m thinking about something else. I instinctively pull the handle because that’s what handles are for. That is, until the door doesn’t open. Then my train of thought is interrupted while I solve the problem of how to open the door. That’s what happens when we encounter unfamiliar affordances.

The thing about those boring radio buttons and text boxes in surveys is that we all know how to use them because they are so ubiquitous on the Web. For some of us, even clicking on words like Comment, Reply or Follow falls in the same category. It’s second nature to us. Knowledge in the head at work. But when was the last time you bought something online that required a slider bar? We are used to selecting our state from a drop-down and entering dates by popping up little calendars and clicking on the day, but how often do we drag something into a shopping cart? Or sort things? These devices rely on knowledge in the world. We should not be surprised that “engaging designs” often produce different answers and take longer.

If we are going to have better online questionnaires we need to be sure to focus on the right things.  Make questionnaires easier, not harder.


The death of MR postponed

Twice in the same week I have seen a version of this quote about Big Data: "Data is about 'how many' and 'how much', whereas research is about 'why' – you have to join it all up." First from this very thoughtful ESOMAR paper by Pieter Paul Verheggen and Wim van Slooten and now from Clive Humby, someone who should know. Right?

So I suggest we accept this as true, at least for a while.


#Twittersurvey: You knew this was coming

It’s been a big week for Twitter.  First came the announcement of the Nielsen deal and this item that seems to say Coke is embracing Twitter surveys in a big way.  Expect a stampede to follow.

This was inevitable but the timing is curious. Pew tells us that as of May, 15% of US adult Internet users say they use Twitter, and just about half of those do so every day. And those folks are disproportionately African-American, young, and urban.

You could look at this as one more reason for MR to work the worry beads, or you could see it as an opportunity. (What, me worry?) After all, we’re supposed to be good at sampling, right? And the basic principles of sampling are really useful for looking at a dataset, understanding its biases, and explaining who the data represent, what it means for the client’s target market, and therefore what actions the client should take.
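
To make that concrete, here is a minimal sketch, in Python, of the kind of bias check those principles suggest: compare who is in the dataset to population benchmarks and compute simple post-stratification weights. The age groups and all the shares are invented for illustration.

    # A toy bias check: compare the sample's composition to the target
    # population and compute post-stratification weights. Invented numbers.

    # Share of each age group in the target population (e.g., Census figures).
    population = {"18-29": 0.22, "30-49": 0.35, "50+": 0.43}

    # Share of each age group among the Twitter users in our dataset.
    sample = {"18-29": 0.45, "30-49": 0.40, "50+": 0.15}

    # Weight = population share / sample share. Over-represented groups get
    # weights below 1; under-represented groups get weights above 1.
    weights = {group: population[group] / sample[group] for group in population}

    for group, w in sorted(weights.items()):
        print(f"{group}: weight {w:.2f}")

In this made-up example the 50+ group ends up with a weight near 3, and that is the tell: the data barely speak for older consumers, and any claim about the client’s target market has to say so.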

But alas, the last 15 years of online research have demonstrated pretty clearly that we don’t understand sampling much at all. If we did, we would have recognized panels for what they are and either labeled the work appropriately or developed the techniques to overcome its shortcomings. The latter issue has finally moved to the top of the agenda for some, but sadly not for all.

So here we go again, ready or not. Twitter, Facebook, Google, Mobile, Big Data – we are going to have to deal with all of it. Will we dig into all of it in a systematic way to figure out what’s really there and what it can tell us, or will we just accept it all at face value? I’d like to think that one way for us to morph as these new data streams go mainstream is by leveraging our experience in research design and especially in the basic principles of sampling, not so we become samplers (God forbid) but so that we can evaluate the validity of the data in front of us.

It's hard to feel encouraged.  And if one more person tells me that it’s going to be ok because of the law of large numbers I surely will scream.
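
For the record, a toy simulation (invented rates, a few lines of Python) shows why that answer misses the point: when the sample is drawn from a skewed subpopulation, a bigger sample just converges faster to the wrong number.

    # The law of large numbers vs. selection bias: a biased sample converges,
    # but to the wrong answer. All rates here are made up for illustration.
    import random

    random.seed(42)

    TRUE_RATE = 0.20     # share of the whole population holding some opinion
    TWITTER_RATE = 0.60  # share among the people who tweet about it

    for n in (100, 10_000, 1_000_000):
        # Sample only from the tweeting subpopulation, as a Twitter survey does.
        hits = sum(random.random() < TWITTER_RATE for _ in range(n))
        print(f"n={n:>9,}: estimate {hits / n:.3f} vs. truth {TRUE_RATE}")

More data from the same biased source buys precision, not accuracy.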