
Multiple Response Questions on the Web

It seems almost second nature to us to randomize response options for multiple response questions in Web surveys. Why do we do that? Well, I hope the answer is that we know there are primacy effects; that is, Rs give greater attention to the items at the top of the response list than to items further down. As a result, the items at the top are more likely to be selected. So we randomize to give every option a roughly equal probability of appearing at the top of the list. Of course, Rs still tend to avoid "deep cognitive processing" of the entire list, and, in all likelihood, they select fewer responses than they would if they seriously considered every item.
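For concreteness, here is a minimal sketch of per-respondent randomization. Everything in it is hypothetical rather than tied to any particular survey platform: the option labels, the respondent_id used as a seed, and the randomized_options helper.

```python
import random

# Hypothetical response options for a check-all-that-apply question.
OPTIONS = [
    "Newspaper",
    "Television",
    "Radio",
    "Social media",
    "Word of mouth",
]

def randomized_options(respondent_id: int) -> list[str]:
    """Return the option list in a per-respondent random order.

    Seeding with the respondent ID keeps the order reproducible,
    so the same respondent always sees the same ordering.
    """
    rng = random.Random(respondent_id)
    shuffled = OPTIONS[:]  # copy, so the canonical order is preserved
    rng.shuffle(shuffled)
    return shuffled

print(randomized_options(42))
```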

We try to avoid this problem on the telephone by using a technique called "forced choice," in which we read every response in the list as a yes/no question. Not only does this help with the primacy issue, but it also results in more responses being selected.

It turns out that this forced-choice technique works equally well on the Web. (See, for example, http://survey.sesrc.wsu.edu/dillman/papers/TSMII%20Check-All%20Forced-Choice%20Paper%20(3).pdf.) Instead of a multiple response or check-all-that-apply format, you put the responses into a grid with a yes or no required for each one. The technique works against primacy and typically results in more responses being selected. It is superior to randomization and ought to be our standard.
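As a rough illustration of the format change, the sketch below expands a single check-all question into a grid of required yes/no items. The field names (q1_1, q1_2, ...) and the forced_choice_items helper are made up for the example, not the API of any real survey tool.

```python
# Hypothetical option labels for the grid.
OPTIONS = ["Newspaper", "Television", "Radio", "Social media"]

def forced_choice_items(question_stem: str, options: list[str]) -> list[dict]:
    """Expand one check-all question into a grid of required yes/no items."""
    return [
        {
            "name": f"q1_{i}",             # one variable per option
            "label": f"{question_stem}: {opt}",
            "choices": ["Yes", "No"],
            "required": True,              # respondent must answer every row
        }
        for i, opt in enumerate(options, start=1)
    ]

for item in forced_choice_items("Where did you hear about us", OPTIONS):
    print(item)
```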

It also helps with programming efficiency, especially when the answers to the multiple response question are used to drive later questions. While the initial randomization is relatively easy to program, carrying that order through later questions is labor-intensive.
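To make the contrast concrete: under forced choice, each option lives in a fixed, named variable, so driving later questions is a simple lookup. The variable names below (q1_newspaper, etc.) are hypothetical.

```python
# Hypothetical forced-choice answers: one fixed variable per option.
answers = {"q1_newspaper": "Yes", "q1_television": "No", "q1_radio": "Yes"}

# Later questions key off the fixed names directly; there is no
# per-respondent randomized order to carry through the instrument.
for name, value in answers.items():
    if value == "Yes":
        print(f"Ask follow-up about {name}")
```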
