Last night I had dinner with an old friend who is also a world-renowned and widely published expert on questionnaire design. We chatted a bit about what's happening in MR, and I asked for his take on the industry's obsession with speeders, i.e., respondents we decide have answered questions too quickly. More specifically: does a longer response time signal a more thoughtful and better answer? His first reaction was to suggest a U-shaped curve with the sweet spot in the middle. A really fast response might indicate no thought at all, while a longer lag may signal some problem in coming up with what the respondent thinks is an appropriate response. Maybe the question is confusing, or the first answer the respondent comes up with doesn't fit the response options, or the respondent just feels his answer is not good enough, that it needs work before reporting it. There also might be differences between questions that ask about attitudes, where we want a top-of-mind response, and questions about behavior, where more searching of memory might ultimately produce a more accurate answer.
As we talked I was reminded of how many of the metrics we've come to use in online surveys as measures of response quality are crude at best. We are quick to delete respondents who answer "too quickly" or straightline when there may well be circumstances under which those are perfectly reasonable response behaviors. Yet few people ever talk about getting rid of slow responders, even though the quality of their answers may be as bad as or worse than that of those who answer quickly. I think this points to our tendency to assess quality by using the things that are easily measured. We can count grids, measure straightlining, compute completion times, and so on, but we don't have much of a clue about how to measure the impact of poorly formed questions, answer categories that don't resonate, or questions that ask about things respondents just don't care about. My friend does his share of litigation consulting, and sometimes he can convince a client to go through a systematic questionnaire development process that includes focus groups, cognitive interviews, and a pretest. He finds that when they do this, the results they get with online panels are much better than when they just sort of wing it. But this sort of systematic development process is not used as often as it should be. To our detriment for sure, but for respondents even more.
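To make the "easily measured" point concrete, here is a minimal sketch of the kind of crude quality flagging the industry relies on: speeders, slow responders, and straightliners. The function name, data fields, and thresholds are all hypothetical and chosen purely for illustration; in practice any cutoffs would need to be calibrated to the specific questionnaire, which is exactly the judgment these simple counts cannot supply.

```python
# Illustrative sketch only: the thresholds (0.3x and 3x the median
# completion time) and the data layout are hypothetical, not a standard.
from statistics import median

def flag_response(completion_secs, grid_answers, all_times):
    """Return crude quality flags for one respondent."""
    flags = []
    med = median(all_times)
    # "Speeder" / "slow" relative to the median completion time
    if completion_secs < 0.3 * med:
        flags.append("speeder")
    elif completion_secs > 3.0 * med:
        flags.append("slow")
    # Straightlining: identical answers across every grid item
    if len(grid_answers) > 1 and len(set(grid_answers)) == 1:
        flags.append("straightliner")
    return flags

# Hypothetical completion times (seconds) for a small sample
times = [310, 80, 290, 320, 900, 305]
print(flag_response(80, [4, 4, 4, 4, 4], times))
# → ['speeder', 'straightliner']
```

Note what the sketch cannot see: a confusing question, answer options that don't fit, or a topic the respondent doesn't care about would all sail through these checks unflagged.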