
Posts from November 2006

Are Sweepstakes Effective as Incentives for Web Surveys?

Several years back it was very fashionable to incent Web survey respondents by offering either cash prizes in a sweepstakes or, in B2B studies, executive toys like color printers and PDAs. These incentives were thought to be effective and were much less expensive than per-complete cash incentives. Some online panels continue to work on this basis, awarding panelists one or more "tickets" to a monthly drawing for each completed survey. The question is: how well does this work?

The consensus from the mail survey literature seems to be that sweepstakes have only a very modest impact on response rate; in other words, they tend to be marginally better than no incentive at all. Former MSI employee Scott Crawford ran a controlled experiment some years back on a student survey and found that a sweepstakes added about seven points to the response rate, both compared to no incentive and when layered on top of a $2 prepaid incentive. Whether the difference between, say, a 35 percent response rate (what Scott got with the $2 prepay) and 42 percent (what Scott got with the $2 prepay plus a sweepstakes) is meaningful I leave to you to decide.
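If you want to judge that difference for yourself, a standard two-proportion z-test is one way to do it. Here is a minimal sketch in Python; the sample sizes are hypothetical (Scott's counts aren't in front of me), so treat this as a way to play with the numbers rather than a verdict on his experiment.

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test for a difference in response rates."""
    # Pool the two rates under the null hypothesis of no difference.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    # Standard error of the difference under the pooled rate.
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical sample sizes -- assume 500 invitations per treatment group.
n = 500
z = two_prop_z(0.35, n, 0.42, n)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

With 500 invitations per group, a 35-versus-42 split comes out to z of about 2.3, which is statistically significant. Shrink the groups to 100 apiece and it no longer is. Statistical significance, of course, is a separate question from whether seven points is worth the cost.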

Now comes an article in the current issue of Social Science Computer Review by Anja Goritz reporting on an experiment with online panelists in Germany. The study compared response rate and retention rate (that is, the share who fully completed the survey) across three treatments: no incentive, a cash sweepstakes with one large prize, and a cash sweepstakes with a number of smaller prizes. The key finding: no difference in response or retention between no incentive and either form of sweepstakes.

My take on all of this is that sweepstakes may have some impact on response with certain kinds of populations--like students--but mostly they are only slightly better than no incentive at all. The longer and harder the survey, the less effective they are likely to be. They may also work less well with panels than with one-time respondents: panelists learn what the odds really mean, and when they don't win after several tries they may get discouraged. But that's just my opinion.