
How Much Does It Hurt?

Talk of visual analog scales (VAS) was everywhere at GOR08.  These are scales that measure a characteristic or attitude across a continuum without the use of labels, numbers, or other markers, except for endpoints.  Their classic use has been in pain measurement as in the example below:

[Figure: a classic visual analog scale for pain measurement, with labels at the endpoints only]

In online research they typically are presented as slider bars that allow the respondent to move the marker continuously across the scale as in the figure below:

[Figure: an online slider-bar implementation of a VAS]
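A minimal sketch of how such a slider might be scored, assuming a horizontal track of known pixel width whose marker position is mapped linearly onto a continuous scale (the function and parameter names here are illustrative, not from any particular survey platform):

```javascript
// Map a marker's pixel position on a slider track to a continuous score.
// markerX: horizontal position of the marker in pixels (0 = left endpoint)
// trackWidth: width of the track in pixels
// min, max: the scale's endpoint values (e.g., 0 = "no pain", 100 = "worst pain")
function vasScore(markerX, trackWidth, min = 0, max = 100) {
  // Clamp the marker to the track, then map linearly onto [min, max].
  const clamped = Math.min(Math.max(markerX, 0), trackWidth);
  return min + (clamped / trackWidth) * (max - min);
}

// Example: a marker dragged to the midpoint of a 400-pixel track.
console.log(vasScore(200, 400)); // → 50
```

Because the respondent can stop the marker anywhere along the track, the recorded value is effectively continuous, unlike the discrete categories of a radio button scale.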

We have done a number of experiments comparing them to more traditional answering methods (such as radio button scales) and in general have found that they yield essentially the same results as those conventional methods. (See, for example, Couper, M.P., Singer, E., Tourangeau, R., and Conrad, F.G. (2006), "Evaluating the Effectiveness of Visual Analog Scales: A Web Experiment." Social Science Computer Review, 24 (2): 227-245.) But we also have found some downsides: they take respondents longer to answer, cause more respondents to terminate because of technical problems, and produce more missing data.  Other studies, such as some done at Harris Interactive, have produced similar results.  So we have generally advised against their use.

Despite this previous research, they continue to be very popular.  See this post, for example.  Researchers are attracted to them because they are cool, because they believe that respondents like them, and because they think they can help make an otherwise tedious survey more interesting.  Some, but not many, argue that they are simply a better way to measure certain kinds of things like attitudes.

By my count there were at least six papers at GOR08 plus one by Mick Couper, Bob Rayner, Dan Hartman and yours truly that touched on VAS in one way or another.  The newest research seems to be saying four things:

  1. The technical problems causing higher termination rates may not be as severe as they once were, so we are losing fewer respondents than in the past.
  2. The gap in response time also is narrowing.  Perhaps respondents are becoming more familiar with them or the newer implementations simply work better.  Some researchers also are arguing that longer response time may not be a bad thing if that time is being spent on cognitive processing rather than trying to figure out how to use the thing.
  3. The data continue to be comparable to other methods such as radio button scales.
  4. The evidence on respondent preferences for VAS or RBS is still not convincing.

I suspect we are going to see more and more use of VAS in online surveys for the simple reason that people think they are cool.  Fortunately, while there is not much in the methods literature to suggest that they improve measurement or the survey experience, it at least seems that we are getting to the point where they do no serious harm.  That may be a low bar, but it's better than some of what we have seen online.
