This is something of an unplanned follow-up to my last post. While catching up on my reading I came across an interesting article in the winter issue of POQ by LinChiat Chang and Jon Krosnick reporting on a neat little study that compares responses from a telephone RDD sample, an online panel recruited by RDD (Knowledge Networks), and an online volunteer panel (Harris Interactive). One key finding is in sync with something I have heard talked about in connection with the ARF study: online panel respondents do a good job of completing questionnaires, and the more surveys they do, the better they get.
As a broad summary, in the Chang and Krosnick study the online panel results showed less satisficing, less social desirability bias, more accurate self-reports, and greater internal consistency than the results from the RDD telephone sample. The online panel folks were simply better at doing surveys. Some of that was attributed to practice, but some of it also was due to the tendency of volunteer panel respondents to select only those studies where they had a strong interest in the survey topic. The telephone results, on the other hand, were more representative both demographically and in terms of electoral participation. These differences persisted even after weighting.
A cynic might say, "Pick your poison." An optimist might say, "Good input to fit-for-purpose methodology choices." I might wonder whether we've used up a lot of time, money, and stomach lining worrying about the wrong problem.