Posts from August 2006

More on Primacy in Web Surveys

In previous posts I've talked about the tendency for respondents in self-administered surveys like Web and mail to choose answers from the top of the answer list, especially if the list is long.  The implication is that respondents are not reading the full list, or at least are putting more cognitive energy into items at the top and less into those further down.

Now one of our colleagues in the Joint Program in Survey Methodology (Mirta Galesic) has written a paper reporting on the results of an eye-tracking study designed to help us understand how respondents process Web questionnaire pages.  The results are pretty startling.  In one experiment, Mirta showed half the respondents a single-choice item with 12 possible answers; the other half saw the same question but with the answers in reverse order.  We had already done the experiment online and found that respondents more often chose from the top half of the list, regardless of the order.  In the qualitative phase Mirta used eye-tracking technology to get a clearer view of how respondents actually read the questions.  The two images below are gaze charts that show where respondents looked and for how long.  The warmer the color, the longer they looked.  While some respondents read the entire list, it's obvious that the top half got the bulk of the attention and hence more answers were selected there.

[Gaze charts for the original and reversed answer orders]

So what to do?  The traditional fix here is to randomize the answer order, and while that will make your distributions look better, it doesn't solve the basic problem: respondents still aren't considering the full set of answers and then accurately reporting their opinion or behavior.  The best solution would seem to be to steer clear of long lists.
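For cases where a long list can't be avoided and randomization is the chosen mitigation, here is a minimal sketch (in Python) of per-respondent randomization; the answer list and respondent IDs are hypothetical, and seeding on the respondent ID simply keeps each person's order reproducible so selections can be mapped back to the original answer codes at analysis time.

    import random

    # Hypothetical sketch of the "randomize the answer order" fix discussed above.
    # The answer list and respondent IDs are illustrative, not from the study.
    ANSWERS = [
        "Local news",
        "National news",
        "Sports",
        "Weather",
        "Business",
        "Entertainment",
    ]

    def randomized_answers(respondent_id, options):
        """Return the answer options in a random order that is stable for a
        given respondent, so the displayed position can later be mapped back
        to the original answer codes during analysis."""
        rng = random.Random(respondent_id)  # seed on the respondent ID for reproducibility
        shuffled = list(options)            # copy so the master list stays in canonical order
        rng.shuffle(shuffled)
        return shuffled

    # Two respondents see the same answers, each in a different (but stable) order.
    print(randomized_answers("resp-001", ANSWERS))
    print(randomized_answers("resp-002", ANSWERS))

Note that this kind of rotation is appropriate for unordered (categorical) lists; scale questions, discussed next, are a different matter.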

One potential piece of good news here: There is some indication that this problem may not be quite so severe when the answers are on a scale.  It seems that respondents infer the scale order after looking at the first few answers and then "jump" to an answer.  But more research is needed to get a clearer fix on what might be going on.


"Blinded" vs. Sponsor-Identified Studies

It is a generally held principle in survey research that identification of the survey sponsor improves response rates. We know, for example, that government-sponsored studies elicit higher response rates, partly because respondents may think they are required to respond or simply because the experience may make them feel patriotic. Surveys sponsored by academic institutions also do well, although not as well as government surveys. Surveys conducted by commercial organizations are the toughest, in part because they raise fears of selling under the guise of a survey, but also because commercial organizations identify their sponsors less frequently. In one notable example, Witt and Bernstein (1992) report achieving response rates as high as 70 percent on disk-by-mail surveys of MIS managers when sponsorship by a high-profile client (Intel) was disclosed. MSI's customer satisfaction surveys, which almost always identify the survey sponsor, routinely achieve higher response rates than other surveys. We should not lose sight of the fact that disclosure of the survey sponsor can add import to the survey and thereby improve the response rate.

We know less about the potential impact of sponsor identification on response bias. In telephone studies respondents will sometimes view the interviewer as an extension of the sponsor and respond more positively. A recent study by Steiger, Keil, and Gaertner (2005) found a tendency for telephone respondents to answer more positively than Web respondents when questions asked the respondent to compare the sponsoring company to competitors. But in general the potential response bias associated with identifying a survey's sponsor is probably not great enough to outweigh the positive impact that identification can have on respondent cooperation.