Posts from August 2005

Randomization

This issue came up in DP the other day when a quex came through with a question design that was fairly difficult to program.  It started with a question that had a long list of potential responses (eight in all).  Rather than asking it as a ranking question (good design choice!), the first question asked the R to make his first choice; the next question then presented the same set, minus that first choice, and asked for his second choice.  The tricky part was that the design specified a randomized list in the first question but then asked that the same order be maintained for the second.  The programming to accomplish this is not straightforward; the software would rather randomize the set of remaining responses again.  So the question to me was whether there were methodological reasons to put in the effort to maintain the same order from the first question to the second.  My answer was, "No."  Here's why.
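
Just to make the tricky part concrete before getting to the "why": maintaining the order means generating the randomization once, storing it, and reusing it with the first choice filtered out.  Here's a minimal sketch of that logic in Python; the post doesn't name the interviewing software, so the names and structure here are purely hypothetical.

    import random

    ANSWERS = ["A", "B", "C", "D", "E", "F", "G", "H"]   # eight hypothetical response options

    def first_question_order(answers):
        # Shuffle once per respondent and keep the result so question 2 can reuse it.
        order = list(answers)
        random.shuffle(order)
        return order

    def second_question_same_order(first_order, first_choice):
        # Same order as question 1, minus the option already chosen.
        return [a for a in first_order if a != first_choice]

    q1_order = first_question_order(ANSWERS)
    first_choice = q1_order[2]   # pretend this respondent picked the third item shown
    q2_order = second_question_same_order(q1_order, first_choice)
    print(q1_order, first_choice, q2_order)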

We randomize answer sets when we are concerned that Rs may not read all of the answers to a question before they make their selection.  The case at hand was a Web survey, and the methods literature tells us that in a visual mode like Web or mail Rs are somewhat more likely to choose from the top of a list of answers than from further down the list, especially if the list is long or has no apparent order to it.  This is called primacy.  In an aural mode, like telephone, the tendency is more to choose from among the last answers heard.  This is called recency.  We randomize the answer order to correct for this because it ensures that across all Rs every answer has an opportunity to appear at the top, in the middle, and at the bottom of the list.  Put another way, it ensures that all answers have an equal chance to be selected across the entire survey.
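
A toy simulation makes the "equal chance" point concrete: give each simulated respondent an independently shuffled list and count how often each answer lands in the top slot.  This is plain Python illustrating the principle, not any particular survey package.

    import random
    from collections import Counter

    answers = list("ABCDEFGH")
    trials = 80_000
    top_slot = Counter()

    for _ in range(trials):
        order = random.sample(answers, k=len(answers))   # independent shuffle per "respondent"
        top_slot[order[0]] += 1

    # With randomization, each answer should land at the top about 1/8 of the time.
    for a in answers:
        print(a, round(top_slot[a] / trials, 3))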

Now to apply this principle to the case at hand.  Let's assume the R gets to number five and by then he knows his first choice is number three.  He stops there and makes his selection.  On the next screen when he sees the same order he may read the remaining three answers he didn't read last time, or he already may know that number five is his second choice because he debated between it and number three on the last question.  So he never bothered to consider the last three answers in the list.  The better design is to randomize the answer set in the second question as well, which maximizes the likelihood that all answers are read.  And luckily in this instance, it's also the easiest thing for the interviewing software.
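
By contrast, the re-randomize design takes almost no logic at all.  A sketch, using the same hypothetical Python setup as above:

    import random

    def second_question_rerandomized(answers, first_choice):
        # Drop the option already chosen, then shuffle the remainder fresh.
        remaining = [a for a in answers if a != first_choice]
        random.shuffle(remaining)
        return remaining

    print(second_question_rerandomized(["A", "B", "C", "D", "E", "F", "G", "H"], "C"))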

All of the above is grounded in the sad reality that Rs don't consistently give their full cognitive energy to every question in every survey.  Sad, I know.  While we can't do much about that on an individual R basis, we can at least use techniques like randomization to minimize its impact across all Rs.

We sometimes randomize questions as well, but that's another issue I'll save for another time.


Everything About Web Surveys

Bookmark it: http://www.websm.org/.  This is a huge project originally begun at the University of Ljubljana in Slovenia and now run by a small consortium of European survey groups with the mission to "provide all target groups (students, professionals, users from academic, public and business sector) with information related to Web survey methodology and to the impact of new technologies on survey process."  It is an amazing resource, so comprehensive that it can sometimes be difficult to find what you want, but impressive nonetheless.


Prenotification Works!

My friends who make their living doing survey methodology have taught me that the answer to almost every survey methodology question is: "It depends."  Having just read how a prenote and a $5.00 incentive didn't do much for the response rate on UM's Survey of Consumer Attitudes (SCA), I came across an article in the Summer 2005 edition of POQ that shows an advance letter on a phone study creating about a 5% bump in response rate and an advance postcard about a 3% bump.  The authors also conclude that the cost of the prenote efforts was less than the cost of the additional dialing.

The obvious question: why does it work here and not for the SCA?  Well, that's not easy to answer.  On its face, this study (done at MSU) seems to have been as rigorous as the SCA (for example, up to 15 call attempts).  But the response rate on the MSU study was only around 25%, roughly half of what the SCA gets.

One hypothesis might be that the UM folks are better at this sort of study than the Sparties: the UM folks get people with a phone call, whereas the Sparties need an advance communication of some sort.  But that's just a guess.

All that said, my takeaway on this is that a prenote is worth considering if you're concerned about response rate.  I'd certainly recommend some tests of our own.  But don't ask me to guarantee that it will work because my answer will be, "It depends."


Response Rate Decline

We all believe that telephone response rates have declined sharply over the last decade but documenting that decline with real numbers has been a bit elusive.  For a number of years CMOR did a cooperation rate study but, to be blunt, it was poorly designed and not all that useful beyond the general conclusion that things kept getting worse.

Now along comes a study by Curtin, Presser, and Singer (POQ, 2005, vol 69, no. 1) that looks at changes in the response rate to UM's long-running Survey of Consumer Attitudes.  Their research updates an earlier study of the period from 1979 to 1996 and also looks at the more recent period from 1997 to 2003.  Their results show that the decline was steeper than the original study had shown and that things have gotten worse since.  To wit:

  • Between 1979 and 1996 the response rate fell from a high of 72% to a low of 60% (about .75% per year).
  • Since 1996 the response rate has fallen at about twice that rate (about 1.5% per year) down to a 2003 response rate of 48%.
  • The main cause of nonresponse in the earlier period was refusals, with noncontacts only a small percentage.  Since 1996, noncontacts have emerged as the principal source of nonresponse.  In other words, people have stopped answering their phones.
  • Even with the introduction of some tried-and-true response improvement techniques (advance letters and a $5.00 incentive) over the last few years, the researchers have been unable to improve the response rate.

Bear in mind that this is a pretty rigorous phone study.  The number of dial attempts has been limited mostly by the field period.  They were calling up to 12 times before treating a number as non-working, but that has been reduced to six calls.  They try to convert every initial refusal and, as noted above, use classic response improvement techniques.

Now to put this in the context of what we do at MSI.  For starters, most of our work uses a much less rigorous design for two reasons:  (1) that kind of rigor costs more than most commercial clients want to pay and (2) most of our clients place greater value on getting data sooner than on waiting for a higher response rate.  We use more rigorous designs for academic work and have in the past reached the kinds of response rates reported above.  Probably the most recent example was the 2000 National Election Study, where with considerable effort we got close to 55%.

All in all, it just keeps getting tougher.