Our colleagues at UM, for whom we have executed a number of methodological studies, have just published a piece on the use of slider bars or, as they call them, "visual analog scales," reporting on some work we did last year (Couper, Tourangeau, Conrad, and Singer, "Evaluating the Effectiveness of Visual Analog Scales: A Web Experiment," Social Science Computer Review, 2006, 24, 2, 227-245). In the experiment we randomly assigned respondents to three conditions: a standard radio button scale display, a numeric input box, and a slider bar that could be positioned with the mouse (shown on the left). A number of advantages were hypothesized for the slider bars, beyond the obvious one of their being just very, very cool. But, as is often the case with these kinds of gadgets, the experiment demonstrated that slider bars do not yield different results from standard radio buttons, and they create a couple of additional problems. First, it took respondents longer to answer the questions with slider bars than in the other two treatments. This might mean that respondents were being more thoughtful in their answers or, more likely, that they were experiencing some delay in download and/or struggling with the gadget. Second, we saw more missing data and some breakoffs at that point in the survey. This probably is a sign of technical difficulties. The slider bars require that Java be enabled in the respondent's browser, something that more than 90 percent of Internet users have by default. We tested for that and tried to assign those respondents to another treatment. But Java applications can sometimes misbehave for other reasons, and that seems to have happened in more instances than we expected.
This is not the first time I have seen results like this. The Web opens the door to some very cool interfaces (see, for example, http://www.greenfield.com/programmingservices.htm ), but often when I see one evaluated closely the results are the same: no difference in answers from traditional survey interfaces, longer completion times, and more respondents lost to technical problems. Gadgets probably have lots of client appeal, but they may not deliver what they promise, and respondents may find them more problematic than traditional interfaces. One clear exception has been constant sum questions, or tallies. We did an experiment on those as well, and while that work has yet to be published, we found that they seem very helpful to respondents without the burden we see with other gadgets.