
Posts from November 2008

Lying about satisfaction?

Back in September I described a WSJ piece that reported on a set of findings from Harris Interactive suggesting that social desirability operates more widely than perhaps I had thought. Nonetheless, I was not convinced that it was an especially significant concern for customer satisfaction surveys. Turns out, I might be wrong about that.

We are working on a proposal that looks at the possible impacts of transitioning a customer sat study from telephone to IVR. While doing my due diligence I found a 2002 POQ article (Roger Tourangeau, Darby Miller Steiger, and David Wilson (2002), "Evaluating IVR," Public Opinion Quarterly, 66, 265-278). In a set of well-designed experiments they found that telephone interviewing consistently produced higher sat scores than IVR. While the differences were not major (less than a point on mean scores for a 10-point scale) and not always significant, they were very stable across questions and across scales of different lengths.
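For anyone who wants to check a mode effect like this in their own data, here is a minimal sketch of the kind of comparison involved. The scores below are simulated for illustration (with a built-in gap of about half a point, in the spirit of the published findings); they are not Tourangeau et al.'s data.

from scipy import stats
import numpy as np

# Hypothetical illustration: simulated 10-point satisfaction scores with a
# small built-in mode effect (~0.5 points). Not the published data.
rng = np.random.default_rng(0)
phone = np.clip(np.round(rng.normal(7.8, 1.8, 500)), 1, 10)
ivr   = np.clip(np.round(rng.normal(7.3, 1.8, 500)), 1, 10)

# Welch's two-sample t-test: is the mode difference in means significant?
t, p = stats.ttest_ind(phone, ivr, equal_var=False)
print(f"phone mean = {phone.mean():.2f}, IVR mean = {ivr.mean():.2f}, p = {p:.4f}")

With sub-point differences like these, whether a given comparison reaches significance depends heavily on sample size, which is one reason the published differences were stable but not always significant.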

The obvious question (at least for me) is how this might translate to the Web, where for years we have seen major differences in sat scores compared to phone. Of course, with the Web we have a second variable, namely seeing the scale displayed rather than having it read. There is some interesting research there as well, but I'll save that for another post.


Sometimes helping hurts

One of the guiding principles of Web questionnaire design should be to constantly look for ways to make it easier for people to answer our questions. We have just moved into the analysis stage of our latest set of experiments with the good folks at ISR. One of the experiments on this round looked at whether it was helpful to break up long answer lists into categories and label them.

One of the questions we tested was educational achievement. Here is a screenshot of the standard question.

We wondered if we could make it easier for people if we put the list into categories or columns. We tested a couple of formats that grouped the answers under headings. Two examples are shown below.

Our results demonstrated pretty convincingly that the use of headings made it harder, not easier, for people to answer. It took respondents almost twice as long to answer the question when we used headings, in part because many people thought they had to select an answer in each group. One in five respondents tried to select multiple responses when we used headings, compared to just 3 percent in the standard single-column presentation.
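As a rough check on whether a gap like that (20 percent versus 3 percent) could be chance, here is a minimal sketch of a two-proportion z-test. The sample sizes are assumptions for illustration; the post doesn't report them.

from math import sqrt
from scipy.stats import norm

# Hypothetical counts: sample sizes are assumed, not reported in the post.
x1, n1 = 100, 500   # headings condition: 20% tried multiple selections
x2, n2 = 15, 500    # standard single column: 3%

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))                      # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.6f}")

At anything like these sample sizes the difference is far beyond what chance would produce, which is consistent with how lopsided the behavioral result was.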

To my mind this reaffirms a key principle we should ruthlessly employ as we try to ease respondent burden and increase engagement: never assume how people will react to a design feature and always make design decisions based on data.


The votes are in!

In a previous post I reported on a comparison done by the folks at FiveThirtyEight.com showing the projected margin of victory for Obama just prior to the election, broken out by whether the polling included cell phones. The table below compares the final poll averages with the actual outcome. ("RCP Average" refers to the average of all polls tracked by RealClearPolitics.com.)

Poll               Obama   McCain   Spread
Final Results      52.6    46.1     6.5
RCP Average        52.1    44.5     7.6
No cell phones     51.7    45.0     6.7
With cell phones   52.4    43.8     8.6

The most striking thing to me is how close the averages are if we think in terms of margin of error. And while the Obama projection for the cell phone group is closer to the final result the differences are so small that it's probably unwise to make much of them. The McCain numbers show a greater spread, but that seems to be due to more undecideds among the cell phone group.
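To put "margin of error" in concrete terms, here is a minimal sketch of the standard 95 percent margin-of-error calculation, assuming a typical national sample of about 1,000 likely voters (the actual poll sizes vary).

from math import sqrt

def moe(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p."""
    return z * sqrt(p * (1 - p) / n) * 100

n = 1000  # assumed sample size; real polls differ
print(f"MOE at 50%: +/-{moe(0.50, n):.1f} points")                   # ~ +/-3.1
# A difference between two independent estimates is noisier still:
print(f"MOE on a difference: +/-{moe(0.50, n) * sqrt(2):.1f} points") # ~ +/-4.4

Against a margin of roughly plus or minus 3 points on any single estimate, the sub-point gaps between the with- and without-cell-phone averages are well inside the noise.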

Looking at the individual poll results in the table below, it seems hard to argue that the polls calling cell phones (highlighted in yellow in the original table) performed better or worse than those that did not, at least in terms of projecting the Obama vote.

Poll                       Obama   McCain   Spread
Gallup                     55      44       11
Reuters/C-SPAN/Zogby       54      43       11
ABC News/Wash Post         53      44       9
CNN/Opinion Research       53      46       7
Pew Research               52      46       6
Marist                     52      43       9
IBD/TIPP                   52      44       8
Rasmussen Reports          52      46       6
Battleground (Lake)*       52      47       5
CBS News                   51      42       9
NBC News/Wall St. Jrnl     51      43       8
Diageo/Hotline             50      45       5
FOX News                   50      43       7
Battleground (Tarrance)*   50      48       2

Of course, these are all "likely voter" polls, which means that the adjustments made by these companies vary dramatically. And the cynic in me is quick to point out that election polls have a tendency to converge into a very narrow range just before the election. Imagine that! All in all, though, I don't see a convincing argument in these data either for including cell phones or for continuing to ignore them.


Cell phones and election polls

In an earlier post I referenced a Pew survey suggesting that excluding cell phones from telephone samples has a negligible effect on estimates, at least for now. But I've just seen an interesting post on FiveThirtyEight.com that would seem to call that into question. They have arrayed the Obama margin from all of the major polls in the chart below.

[Chart: Obama margin by poll, from FiveThirtyEight.com]

The yellow bars indicate polls that include cell phones, and the story would seem to fit with what we would expect, namely, that the younger demographic that generally favors Obama gets underrepresented when cell phones are excluded from the sample. One poll not in the list that many have pointed to in recent days as evidence that things are tightening is Mason-Dixon, and it, too, excludes cell phones.

It's going to be very interesting to see how all of this gets worked out once the real poll (a.k.a. election) results are in.