The empty threat of a major blizzard (a forecast of up to 15" of snow against an actual snowfall of 5") here in Ann Arbor presented a nice excuse to stay home and catch up on some reading. The December issue of the Journal of Official Statistics was on the list, and I was happy to see an article by friends and colleagues from the University of Michigan based on research for which my company did the data collection. If you follow the survey methods literature, then you know that the team of Tourangeau, Couper, and Conrad, along with their graduate students, has produced a large volume of very interesting stuff under the general heading of "Visual and Interactive Features of Web Surveys." (Full disclosure: I have been PI on some of the grants funding this research.)

The specific topic of the JOS piece is the use of definitions and clarification features in Web surveys. It's not uncommon, at least in our work, to define some terms for a respondent, say, prior to a DCM exercise, and then later in the exercise provide links back to those definitions for respondents who may have forgotten what something means. That's not the exact application here, but the concept is the same. The question the research asks is: if we provide potentially useful information to respondents via hyperlinks, will they use it? The answer seems to be mostly "no." Providing the information via mouseovers works better, but nothing beats putting the definition on the screen with the question that uses the term.
Of course, that's not always possible. In copy tests, for example, you may display the copy up front but then want to give respondents the chance to view it again on demand as you ask questions about it. Still, there are a couple of important cautionary notes here. First, what may seem like a cool feature to a Web questionnaire designer may not be anywhere near as cool to a Web respondent. This is a lesson we seem to need to learn over and over. Second, when we incorporate these kinds of features into questionnaires, we should not assume that respondents are actually using them and therefore giving us good, well-formed answers. In multiple iterations of this experiment, respondents generally gave more accurate answers when they used the definitions and less accurate answers when they ignored them.