Posts from June 2006

A Perfect Storm

We may have a perfect storm brewing in the world of Web panels.  I've grabbed onto this often-used metaphor to describe the collision of three trends in MR, each pulling against the others. They are:

  • Increasing client focus on panel quality.  The MR consulting firm Cambiar has reported in its last two annual surveys of clients that the top concerns are panel quality and cooperation rates.  We are seeing that among some of our clients as well.
  • More MR firms deciding to buy rather than build.  Surveys of MR practitioners, also by Cambiar, report that MR firms are beginning to shy away from building and maintaining their own panels, instead choosing to rely more on third-party suppliers such as Survey Sampling, Greenfield, and e-Rewards.  The implication here is that the panel business is brisk, and that is indeed the case despite Greenfield's woeful stock price.
  • Deteriorating panel quality.  Panels are being overused and response rates are cratering.  This is a trend we've observed in our own work going back to 2001.  Anecdotally, we know that panel members often receive multiple survey invitations in the same day, and some panel members are doing lots and lots of surveys.  Studies have shown that there are panel effects, meaning that response quality deteriorates the longer people stay on panels and the more surveys they do.  Panel vendors do not deny that cooperation is a problem, and so is keeping up with the demand for respondents, especially in tough-to-get demographic groups.

So what's the next phase?  I wish I knew.  One thing is clear: the choice of a vendor is more important than ever.  It also seems clear that we need to understand panel response quality much better than we do and probably develop much better indicators of panel quality and respondent satisficing behavior.  Finally, don't expect the panel companies to solve this problem for us.  All signs are that they are going to ride their panel assets to exhaustion, or at least all the way to the bank.
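
To make the satisficing point a bit more concrete, here is a minimal sketch (in Python) of the kind of indicators I have in mind: flagging "speeders" who finish far faster than the typical respondent and "straightliners" who give the identical answer to every item in a rating grid.  The field names and cutoffs are hypothetical and purely illustrative; a real quality program would need more, and better-calibrated, indicators than these.

    # Illustrative satisficing indicators: "speeding" and "straightlining".
    # Field names and the 40%-of-median speed cutoff are hypothetical.
    from statistics import median

    def flag_satisficers(respondents, speed_cutoff=0.4):
        """Return {id: reasons} for respondents who look like satisficers.

        respondents: list of dicts with 'id', 'minutes' (interview length),
                     and 'grid' (answers to one rating battery).
        """
        typical = median(r["minutes"] for r in respondents)
        flagged = {}
        for r in respondents:
            reasons = []
            if r["minutes"] < speed_cutoff * typical:
                reasons.append("speeder")        # much faster than typical
            if len(r["grid"]) > 3 and len(set(r["grid"])) == 1:
                reasons.append("straightliner")  # same answer to every item
            if reasons:
                flagged[r["id"]] = reasons
        return flagged

    if __name__ == "__main__":
        sample = [
            {"id": 1, "minutes": 12.0, "grid": [4, 5, 3, 4, 2]},
            {"id": 2, "minutes": 3.5,  "grid": [3, 3, 3, 3, 3]},
            {"id": 3, "minutes": 11.0, "grid": [5, 4, 4, 5, 3]},
        ]
        print(flag_satisficers(sample))  # {2: ['speeder', 'straightliner']}

None of this is rocket science, which is rather the point: even crude checks like these are a start on the better quality indicators we need.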


More on Social Desirability

Last week I sat through a very large panel discussion at a CASRO meeting and heard a lot of industry people express their views on the pressing problem of declining respondent cooperation.  There was, of course, discussion about online as well as mixed-mode, and there was no shortage of people prepared to argue that online is better because people don't always tell the truth to interviewers.  I came home to the latest issue of POQ, in which there is an article reporting on something called "T-Acasi," which stands for Telephone Audio Computer-Assisted Self-Interviewing.  Among the authors is Charles Turner from the Research Triangle Institute, who was one of the early proponents of offline Acasi.  In that original approach, in-person respondents listened to questions played through earphones from a laptop and completed a CAPI interview without the interviewer.  In T-Acasi, the interview begins on the telephone and at one point the interviewer disappears and the interview becomes self-administered via IVR.

The research record on self-administration, whether on paper or via Acasi, has pretty clearly demonstrated that you get better answers to sensitive questions about things like drug use, sexual behavior, and abortion.  "Better" in this context means higher reports of these socially sensitive behaviors.  It's not at all surprising that these results replicate for T-Acasi.

The question for us, and the subtlety that escaped those CASRO panelists, is whether this finding translates to the kinds of questions we ask in MR, such as satisfaction, propensity to purchase, or smoking.  The experiments we've done suggest that the effect, if any, on satisfaction surveys is pretty minor and manageable, but we may have a bigger problem with behavioral issues like smoking.  But the smoking issue has been clouded (no pun intended) by the fact that comparisons have involved RDD telephone versus Web panel sample, so sample bias is a major confounding factor.

The bottom line is that social desirability effects are real and we need to be alert to them, but they may not be as large or as consistent across different behaviors and attitudes as some people might think.  The unfortunate answer here, as in so many of these methodological issues, is, "It depends."


Left Side Screen Design

In an earlier post (http://regbaker.typepad.com/regs_blog/2006/04/web_users_read_.html) I commented on some research by Jakob Nielsen showing that Web users tend to read the screen in a rough F-shaped pattern.  Shortly thereafter we got some pushback from a client about our standard of putting the Next button on the left and the Previous button on the right.  This client was not convinced by our white paper reporting on research that showed (1) a slight tendency to go back to a previous screen by mistake when the Previous button is on the left and (2) regardless of navigation button placement, respondents seem to figure it out quickly.

Thinking about how best to respond to the client, it occurred to me that the Nielsen research is still further support for a left-side design.  Since the last thing we want is to force respondents to go searching for stuff on a questionnaire screen, placing the key navigation button on the left side seems like a good idea.  Now I've just seen another eye-tracking study (Agnieszka Bojko, "Using Eye Tracking to Compare Web Page Designs: A Case Study," Journal of Usability Studies, vol. 1, May 2006, pp. 112-120) that shows a similar focus by Web users on the left side of the screen.  Increasingly it seems that there is an informal standard among Web site designers to put the important stuff on the left side, and as a result that's where Web users instinctively look. (Of course, there also is the Western convention of reading left to right.) Since our goal in Web survey design is always to make the survey task as easy and intuitive as possible, we should follow mainstream Web design principles, and those principles would seem to drive us to put the important stuff on the left side of the screen.


Automated Telephone Surveys

I recently had someone ask me about the effectiveness of those automated surveys where the phone number is computer dialed and the respondent is immediately put into an IVR interview.  This is a pretty inexpensive methodology and the obvious question is whether these surveys are as "good" as a standard telephone interview with a live interviewer.

The companies who do these surveys claim that their results are every bit as good as those that use live interviewers.  (One of the loudest companies making this claim is www.surveyusa.com.)  The real bread and butter for these guys is political polling, and their main clients are media companies, especially TV stations.  They typically use RDD sample, have a local on-air talking head record the interview, dial away, and then apply standard demographic weighting techniques on the back end (a rough sketch of that kind of weighting follows the list below).  They will show you lots of data to indicate that this methodology generates survey estimates that are comparable to what you get with a live interviewer, along with response rates that are, well, maybe a little lower.  So what's the downside?

  • IVR puts a lot of limitations on interview length and complexity.
  • Without an interviewer you have the classic problems of higher breakoff rates, no respondent verification (you get whoever answers the phone), no ability to explain difficult questions, and (probably) higher missing data rates.
  • While using an on-air personality may improve the response rate, there also may be unmeasured bias precisely because the individual is known to the respondent, who may have strong positive or negative feelings about "the interviewer."
  • It's not clear what kinds of response rates these surveys get outside of political polling or when an on-air personality is not the recorded interviewer.
  • They don't supply a lot of detail, so we just have to take their word on the response rates.
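
As an aside on the weighting step mentioned above: "standard demographic weighting" in this context usually means post-stratifying or raking the completed interviews to census-style targets.  Here is a minimal sketch of raking (iterative proportional fitting) on two margins; the margins, targets, and sample records are made up for illustration, and this is not a description of any particular company's actual procedure.

    # Minimal sketch of demographic weighting via raking (iterative
    # proportional fitting).  Margins, targets, and records are hypothetical.

    def rake(respondents, targets, iterations=20):
        """Attach a 'weight' to each respondent so the weighted distribution
        of each demographic variable matches its target proportions."""
        for r in respondents:
            r["weight"] = 1.0
        for _ in range(iterations):
            for var, target_props in targets.items():
                total = sum(r["weight"] for r in respondents)
                for category, prop in target_props.items():
                    current = sum(r["weight"] for r in respondents
                                  if r[var] == category)
                    if current > 0:
                        factor = (prop * total) / current
                        for r in respondents:
                            if r[var] == category:
                                r["weight"] *= factor
        return respondents

    if __name__ == "__main__":
        sample = [
            {"sex": "F", "age": "18-44"}, {"sex": "F", "age": "45+"},
            {"sex": "F", "age": "45+"},   {"sex": "M", "age": "45+"},
        ]
        targets = {
            "sex": {"F": 0.52, "M": 0.48},
            "age": {"18-44": 0.45, "45+": 0.55},
        }
        for r in rake(sample, targets):
            print(r["sex"], r["age"], round(r["weight"], 2))

Weighting like this lines up the demographics after the fact, but it can't, by itself, fix whatever bias comes from the people who hang up on a recorded voice, which is part of why the list above matters.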

OK, so that's a nice reasoned approach, and one that takes pretty much at face value what these companies say about what they do and the kind of results they get.  But what does your gut tell you?  Mine tells me that I'm not at all likely to do one of these things should I fall into their sample.  If you can't even go to the trouble to have a human being call me, why should I go to the trouble to answer your questions?  Call it social exchange theory or human nature, but I just don't see this methodology as a serious competitor to a well-designed and executed CATI survey except at the very low end.