Posts from September 2005

How Many . . ?

One of Don Dillman's Principles for Writing Survey Questionnaires says: "Avoid specificity that exceeds the respondent's potential for having an accurate, ready-made answer."  It's a good principle but hard to follow because clients often want to know how many of this or how many of that.  So we ask it anyway, and instead of getting an accurate number from every respondent we get an estimate, an educated guess.  One obvious indicator that we are getting an estimate rather than an accurate number is "heaping," that is, lots of numbers that end in a five or a zero.  No one ever seems to answer "13," but we have lots of people answering "10" and "15."
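
For the analytically inclined, here is a minimal Python sketch of the kind of heaping check I have in mind, assuming you have the raw "how many" answers as integers (the variable names are just illustrative):

```python
def heaping_rate(answers):
    """Share of integer answers ending in 0 or 5 -- a rough heaping indicator."""
    counts = [a for a in answers if a is not None]
    if not counts:
        return 0.0
    return sum(1 for a in counts if a % 5 == 0) / len(counts)

# With no heaping we'd expect roughly 20% of answers to end in 0 or 5;
# a much higher share suggests respondents are estimating, not counting.
responses = [10, 15, 10, 12, 20, 25, 7, 30, 10, 50]
print(f"heaping rate: {heaping_rate(responses):.0%}")  # 80% in this made-up example
```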

In The Psychology of Survey Response, Tourangeau, Rips, and Rasinski describe a technique they call "Decomposition" that can be useful in those instances where a respondent might not have a "ready-made answer."  They suggest that in place of the big "How many" question you ask about smaller subcategories that eventually total up to the big one.  Example:  Rather than ask, "How many sporting events did you attend last year?" you might ask "How many baseball games? How many football games? How many basketball games?" etc.
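
Just to make the arithmetic concrete, here is a tiny Python sketch of what decomposition amounts to on the back end: the overall count is computed from the subcategory answers rather than asked directly (the category list and field names are invented for illustration):

```python
# Hypothetical subcategory questions standing in for one global "how many" question.
SPORT_CATEGORIES = ["baseball", "football", "basketball", "hockey", "other"]

def decomposed_total(answers):
    """Derive the overall count from the subcategory answers, treating blanks as zero."""
    return sum(answers.get(category) or 0 for category in SPORT_CATEGORIES)

one_respondent = {"baseball": 4, "football": 2, "basketball": 0, "hockey": 1}
print(decomposed_total(one_respondent))  # 7 -- computed for the respondent, not asked directly
```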

Too often we do this in the opposite direction, and that can lead to inaccurate data.  Example:

Q1.  How many patients have you treated in the last six months with condition x?

Q2.  Thinking about the patients you've treated with condition x in the last six months, how many did you treat with each of the following treatments?  (Total must sum to the number of patients reported in Q1)

The theory behind Decomposition would suggest that you get a better count of patients from Q2 than you do from Q1, and indeed questions like Q1 almost always show lots of heaping.  To make matters worse, we insist that the respondent juggle his or her numbers in Q2 to match Q1, even though the total in Q2 may be more accurate.
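
If you can capture both the Q1 answer and the unforced Q2 entries before any reconciliation, a simple consistency check makes the problem visible.  A rough Python sketch, with made-up field names:

```python
def flag_inconsistencies(records, tolerance=0):
    """Compare each respondent's Q1 total against the sum of the Q2 breakdown.

    `records` is assumed to be a list of dicts like
    {"q1_total": 20, "q2_parts": [5, 5, 5, 3]} captured *before* any forced
    reconciliation. Returns the respondents whose decomposed sum disagrees with Q1.
    """
    flagged = []
    for rec in records:
        q2_sum = sum(rec["q2_parts"])
        if abs(q2_sum - rec["q1_total"]) > tolerance:
            flagged.append({**rec, "q2_sum": q2_sum, "gap": q2_sum - rec["q1_total"]})
    return flagged

sample = [
    {"q1_total": 20, "q2_parts": [5, 5, 5, 3]},   # heaped Q1; the parts sum to 18
    {"q1_total": 12, "q2_parts": [6, 4, 2]},      # consistent
]
for rec in flag_inconsistencies(sample):
    print(rec["q1_total"], rec["q2_sum"], rec["gap"])
```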

Decomposition is a technique worth considering in questionnaire design, especially if you believe the respondent can be helped to a more accurate answer by stimulating his or her recall in this way.  Some respondents may take themselves through this kind of exercise in their head to get to a number without our help, but sadly not every respondent is that well motivated.


Longer Field Periods Add Value

Dan Zahs pointed me to this online article http://www.surveysampling.com/frame.jsp?ID=theframe/2005/September3.html from one of our principal sample providers, SSI.  The general point is this:  the longer you stay in the field with a survey, regardless of mode, the better representation you get among respondents.  There are a whole lot of factors that determine when a sampled person is most likely to respond to a survey request.  In addition to general receptivity to surveys there are things like attitude toward the survey topic, lifestyle, and basic demographics.  A couple of concrete examples:

  • On customer sat surveys we sometimes see respondents with very positive attitudes toward the surveyed company responding earlier than those with less positive attitudes.  So as the field period moves along, satisfaction scores may decline.
  • On phone surveys where we do few callbacks we often see a bias toward people who are likely to be at home and relatively easy to reach (e.g., the elderly and larger households) and against people less likely to be at home and more difficult to reach (e.g., younger people and one- or two-person households).

This is terribly important because a fundamental assumption on every survey we do is that the sampled persons who did not respond are no different attitudinally or behaviorally from those who did.  Put another way, for our results to be valid we must be sure that they would not change substantially if we interviewed everyone in the sample instead of just those who cooperated.
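
One practical check is to split completed interviews by when they came in and compare a key measure across the early and late groups.  A rough Python sketch, assuming each completion record carries a completion date and a satisfaction score (the field names are invented):

```python
from datetime import date
from statistics import mean

def early_vs_late(completes, cutoff):
    """Compare mean satisfaction for interviews completed before vs. after a cutoff date.

    `completes` is assumed to be a list of dicts like
    {"completed_on": date(2005, 9, 10), "satisfaction": 8}.
    A large gap between the two means is a warning sign that a short field
    period would have over-represented the early (often more favorable) responders.
    """
    early = [c["satisfaction"] for c in completes if c["completed_on"] <= cutoff]
    late = [c["satisfaction"] for c in completes if c["completed_on"] > cutoff]
    return mean(early) if early else None, mean(late) if late else None

completes = [
    {"completed_on": date(2005, 9, 6), "satisfaction": 9},
    {"completed_on": date(2005, 9, 7), "satisfaction": 8},
    {"completed_on": date(2005, 9, 20), "satisfaction": 6},
    {"completed_on": date(2005, 9, 22), "satisfaction": 7},
]
print(early_vs_late(completes, cutoff=date(2005, 9, 14)))  # (8.5, 6.5)
```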

A number of factors work against long field periods including cost, client need for fast turnaround, and, in transaction surveys, the need to limit recall periods.  With the Web especially we can turn surveys very quickly and clients appreciate that.  But fast turnaround has its downside and we need to be alert to any potential bias it might introduce.


"How Does Ranking Rate?"

The issue of ranking questions keeps coming up.  In general, survey methodologists seem to favor rating-style questions over rankings, especially if the number of items to be ranked or rated is large.  Dillman says it best when he writes ". . .avoid asking respondents to rank large numbers of items from top to bottom.  Such a task is quite difficult for most respondents to accomplish . . ."  Beyond the difficulty for respondents (as if we needed another reason), rankings have the inherent weakness of not expressing the magnitude of differences in respondent preferences.  In other words, we don't know if the R prefers #1 over #2 by a lot or just a little.  Hence the preference to ask the R to rate the items individually rather than put them in an order of preference.

Randy Thomas at Harris Interactive (and the guy from whom I stole the title for this post) did an interesting experiment in which he tested rankings vs. ratings in a Web study.  I have this paper in hard copy and can make a copy for anyone who might be interested.  I'll simplify the design by describing it as assigning one set of Rs to rank a set of items and another set to rate them, even though it's a lot more complicated than that.  Some got five items and some got 10.  He tracked the time it took to do the task and asked Rs a couple of follow-up questions about difficulty.  The key results:

  • There were no real differences in outcomes, that is, ratings and rankings produced essentially the same results.
  • The ranking tasks took longer, Rs reported them to be more difficult, and Rs perceived them as being less accurate.
  • Rs assigned to the 10-item ranking condition found the task more difficult, and their outcomes were less comparable to ratings than those of Rs in the five-item condition.

So the takeaway on this would seem to be to avoid ranking questions and use ratings instead, on the grounds that rankings don't provide us with significantly better data but they make the R's task harder, never a good thing.  In addition, ranking questions get increasingly difficult as the number of items to be ranked increases.
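
If you want to check the "same results" finding on your own data, one simple approach is to derive the aggregate order of items from each method and see whether the two orders agree.  A small Python sketch with invented data (this is not Thomas's actual analysis):

```python
from statistics import mean

def aggregate_order(scores_by_item, higher_is_better=True):
    """Order items by their mean score (mean rating, or mean rank where lower = better)."""
    means = {item: mean(values) for item, values in scores_by_item.items()}
    return sorted(means, key=means.get, reverse=higher_is_better)

# Invented example: ratings on a 1-10 scale, ranks where 1 = most preferred.
ratings = {"A": [9, 8, 9], "B": [6, 7, 5], "C": [4, 5, 4]}
ranks = {"A": [1, 1, 1], "B": [2, 2, 3], "C": [3, 3, 2]}

print(aggregate_order(ratings))                        # ['A', 'B', 'C']
print(aggregate_order(ranks, higher_is_better=False))  # ['A', 'B', 'C'] -- same order
```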

So if a client comes to you with a questionnaire draft that has ranking questions in it I would suggest the following:

  • Try to convince him or her to use rating questions instead.
  • If your client insists on rankings and the number of items to be ranked is large (i.e., more than five) then try to convince him or her to ask the R to choose only his or her top three.  Trying to rank eight or 10 items without ties is really difficult for an R, and the programming effort to check for ties and help the R correct errors is significant.
  • From a programming perspective, the best approach is to have a series of questions in which the R selects his or her first choice and then gets the list back again with the first choice removed and is asked to make a second choice, and so on (a rough sketch of that flow follows this list).
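
Here is a minimal Python sketch of that sequential flow, written as a console loop just to show the logic; in practice it would be implemented as survey-software screens rather than input() prompts:

```python
def sequential_ranking(items, max_choices=3):
    """Ask for a first choice, remove it, re-present the shortened list, and repeat.

    Ties are impossible by construction, so there is nothing to validate afterward.
    """
    remaining = list(items)
    order = []
    for position in range(1, min(max_choices, len(items)) + 1):
        print(f"\nChoice #{position} -- which of these do you prefer most?")
        for idx, item in enumerate(remaining, start=1):
            print(f"  {idx}. {item}")
        pick = None
        while pick is None:
            raw = input("Enter a number: ").strip()
            if raw.isdigit() and 1 <= int(raw) <= len(remaining):
                pick = remaining.pop(int(raw) - 1)
            else:
                print("Please enter one of the listed numbers.")
        order.append(pick)
    return order

# Example: rank only the top three of a longer list, per the advice above.
# print(sequential_ranking(["Price", "Quality", "Service", "Speed", "Warranty"]))
```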

It also goes without saying that we should never use ranking questions in an aural mode such as a telephone survey. They are tough enough when you can see the items to be ranked.  Doing all of that in your head based on what you heard is more than most can manage.


The Web Survey Participation Process

A recent article in the Journal of Official Statistics offers a useful way for us to think about the respondent decision-making process.  The authors, some of whom are involved in websm.org, describe four stages in the participation process: (1) initial contact by e-mail or snail mail; (2) initial accessing of the questionnaire by login or clickthrough; (3) starting the survey by clicking to the first question; and (4) completing the full survey.  The authors' goal is to describe a survival model that depicts how the original sample winnows itself down to the hopefully-not-too-small proportion of sample members who eventually complete.

What I find interesting is the simple setting out of the process with the obvious implication that we should look closely at each stage and do our best to optimize it so that we get as high a response rate as we can manage.  The stage at which we/MSI almost always lose the greatest number of people?  Stage 1.  Once we get them to the site we do pretty well, but convincing respondents to participate is a complex, difficult task.  More on that another time.
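
Here is a minimal Python sketch of that kind of stage-by-stage accounting, assuming we can flag how far each sampled person got (the stage labels follow the article's four stages; the data structure is invented):

```python
STAGES = ["contacted", "accessed", "started", "completed"]

def participation_funnel(sample):
    """Count how many sample members survive to each stage and the drop at each step.

    `sample` is assumed to be a list of per-person records like
    {"contacted": True, "accessed": True, "started": False, "completed": False}.
    """
    total = len(sample)
    reached_prev = total
    for stage in STAGES:
        reached = sum(1 for person in sample if person.get(stage))
        print(f"{stage:>10}: {reached:>4} of {total} "
              f"({reached / total:.0%}), lost {reached_prev - reached} at this step")
        reached_prev = reached

# Toy example: with typical Web surveys the biggest drop shows up at the first step.
example = [
    {"contacted": True, "accessed": True, "started": True, "completed": True},
    {"contacted": True, "accessed": True, "started": True, "completed": False},
    {"contacted": True, "accessed": False, "started": False, "completed": False},
    {"contacted": False, "accessed": False, "started": False, "completed": False},
]
participation_funnel(example)
```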

You can find the article online here:  http://www.jos.nu/Articles/abstract.asp?article=203451.