
Posts from October 2005

Mode Effects: Part Two

Arguably the biggest difference between Web surveys and phone surveys is the absence of an interviewer. (There's also that visual vs. aural stuff, but more on that in another post.)  And when people see evidence of Web studies producing different results than phone surveys they often jump to "social desirability" as an explanation.  The theory is that we all want to present ourselves in as favorable a light as possible when we interact with other people, even survey interviewers.  So, for example, when an interviewer asks, "Do you smoke cigarettes?" we might be inclined to answer "No" because we don't want the interviewer to know that we have so little will power that we cannot give up a habit that in all likelihood will end our lives prematurely.  We answer "No" even though we smoke a couple of packs a day.  But when the question is on a computer screen with no one there to judge us we might be more inclined to fess up. Or so the theory goes.

In truth, there is lots of research to show that self-administered surveys yield higher estimates of things like drug use, abortion, extramarital sex, and alcohol consumption than do interviewer-administered surveys.  It seems pretty clear that on socially sensitive issues, social desirability is at work.  But on questions about satisfaction with your electric company?  Or with your insurance agent?  While we might see differences between Web and phone on these kinds of questions, is it really about respondents wanting to appear positive and optimistic to telephone interviewers?

I don't have the data to prove it, but my hunch after looking at a lot of these studies is that social desirability is a major issue for many of our healthcare surveys and that converting them over to Web will be problematic.  But for much of our satisfaction work I don't see a similar problem.  I suspect that the differences we see there, which often are quite small, are more about seeing a scale displayed on a screen rather than hearing it read over the phone.  As I said, it's mostly a hunch.  More on that later.


Example of a Good Survey Solicitation

Lately I've been paying more attention to the survey requests that I get.  Here is one that came in the mail.  It's a bit long but has all of the key elements of a good survey request.  Click on the image to see it full screen.

Of course, I might modify it a bit.  I would not lead with all of that talk about health care costs because some people might stop reading, expecting that it's a fund-raising letter.  So I would put the survey stuff from the second paragraph up in the first paragraph, right after the first sentence.  But I still think this is a good, strong letter and there is much here to emulate.


Gaining Cooperation

The toughest part of a telephone interviewer's job is getting the respondent to agree to do the survey.  As we have moved into survey modes like Web and mail where there is no interviewer, the job of getting respondents to do our survey falls to the solicitation letter or email.  So writing strong and convincing solicitations is one of the most important new skills we need to learn as we do an increasing number of these self-administered surveys.

The survey methods literature is replete with hypotheses and theories about why people choose to cooperate with or refuse survey requests.  Monetary incentives have certainly gotten lots and lots of attention, and based on my quick, unscientific sampling of some of our respondent solicitation messages I'd say that we (MSI) have come to view them as not just the primary reason respondents cooperate but the only reason they cooperate.  So we send out lots of letters and emails that more or less say: "We are conducting a survey and we will pay you $$ to do it.  Here's how."

The survey methodologists will argue that the decision about whether to cooperate is much more complicated than a simple economic transaction.  People have questions like:

  • Who are these people?  Are they reputable or is this a scam?
  • Why was I selected?  Where did they get my name?
  • What's the survey about and why is it worth doing?
  • What are they going to do with the results?
  • By when do I need to do the survey?
  • What's in it for me besides the money?

This led me to mail survey maestro Don Dillman's Mail and Internet Surveys.  His advice is to include the following to encourage participation:

  • Be clear about the survey topic and make it seem as interesting as possible
  • Describe the goals of the study and what will be done with the results
  • Credential MSI and, if not a blind study, the sponsoring client
  • Give assurances of confidentiality
  • Be clear about the closing date of the survey
  • Specify the incentive and how and when it will be delivered

And then I got an email invitation from Harris Interactive to do a customer sat survey for Microsoft.  I was impressed.  All of the Dillman elements were there including a link to a Web site where they explained their relationship to Microsoft and their privacy policy.

If we adopt this approach will our response rates skyrocket?  No.  But we might be able to generate enough improvement to lower our costs some, increase our response rates, and get to quotas more quickly than we do now.  The only way to know is to begin doing some well-designed experiments in which respondents are randomly assigned to different letters and the response rates by letter are monitored.  I'd suggest doing just two cells per survey rather than trying to vary a whole lot of things at once, and then honing the approach over multiple surveys.  It's inexpensive research that just might have a significant payoff, and clients will like what they see.
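To make that concrete, here is a minimal sketch (in Python, with made-up counts rather than real MSI data) of how a two-cell letter test might be tallied and checked for a meaningful difference in response rates; the chi-square comparison is just one reasonable choice of test.

    # Minimal sketch of analyzing a two-cell solicitation-letter test.
    # All counts below are hypothetical placeholders, not real survey data.
    from scipy.stats import chi2_contingency

    invited   = {"Letter A": 1000, "Letter B": 1000}   # randomly assigned sample
    completes = {"Letter A": 212,  "Letter B": 257}    # completed the survey

    for cell in invited:
        rate = completes[cell] / invited[cell]
        print(f"{cell}: {completes[cell]}/{invited[cell]} responded ({rate:.1%})")

    # 2x2 table (completes vs. non-completes) for a simple chi-square test
    table = [
        [completes[cell], invited[cell] - completes[cell]]
        for cell in invited
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")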


Mode Effects: Part One

It is a well-established principle in survey research that the same question asked in different survey modes (e.g., telephone vs. Web, face-to-face vs. telephone, interviewer-administered vs. self-administered, etc.) will sometimes elicit a different pattern of responses.  This issue of mode effects is a complicated one, and it has now emerged as a major point of focus within the industry as we look to convert studies to Web from other modes or take the halfway step of mixed mode.

This is a multi-faceted issue, and in this post I want to speak only to the narrow topic of missing data, that is, the frequency of non-substantive responses such as Don't Know or Refused.  I also want to focus on what is most important to us here at MSI, that is, the differences we are likely to see between telephone and Web.  Those differences often seem to come down to how you present your question on the Web.

On the telephone we require that every question have an answer, even if it's just the interviewer recording that the respondent refused to answer or didn't know the answer.  But even though we will accept Don't Know or Refused as an answer, those codes are almost never read to the respondent (R), so Rs don't necessarily think of them as answers they can select; only interviewers can record them.  When we were developing our original Web Questionnaire Standards we carried over the principle that every question must be answered, but to be fair to the R (and consistent with the phone) we gave them the option of a Don't Know or a Refused on the screen, although we did urge the use of only one such code to reduce visual clutter.  This is consistent with one methodological school of thought that maintains that sometimes Rs may indeed not have an answer, and if you force them to give you one they will just make something up or pick a response at random.  Certainly not what we want.

There is another school of thought that argues Rs who choose a non-substantive response are satisficing, that is, not taking the survey task seriously, and therefore the non-substantive response options should not be offered.  They punt on the issue of whether you should require a response.

Both the literature and some work we have done internally show pretty clearly that if you put the non-substantive responses on the screen in a Web survey, Rs will select them more often than they do on the telephone, where those codes are not read.  In some mixed-mode tests we've done on an Energy study the Web produced twice the rate of non-substantive responses that the telephone did.  You can reduce this effect by not displaying the non-substantive response on the Web.  Of course, you then need to decide whether to require a response or not.  A sensible compromise here might be to require a response to an attitude or opinion question but, on questions of fact or behavior where an R may legitimately not know the answer or refuse, to not require one or even to provide a non-substantive option.  We have been revising our Web Questionnaire Standards in this direction.
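Here is a minimal sketch of one way that compromise could be written down as a rule; the question types and field names are hypothetical illustrations, not our actual Web Questionnaire Standards.

    # Hypothetical encoding of the compromise described above:
    # attitude/opinion items force an answer and hide Don't Know;
    # fact/behavior items are not forced and may show a Don't Know.
    from dataclasses import dataclass

    @dataclass
    class ItemRule:
        require_response: bool   # must the R answer before advancing?
        show_dont_know: bool     # is a non-substantive option displayed?

    def rule_for(question_type: str) -> ItemRule:
        if question_type in ("attitude", "opinion"):
            return ItemRule(require_response=True, show_dont_know=False)
        if question_type in ("fact", "behavior"):
            return ItemRule(require_response=False, show_dont_know=True)
        raise ValueError(f"unknown question type: {question_type}")

    print(rule_for("opinion"))    # forced answer, no Don't Know on screen
    print(rule_for("behavior"))   # not forced; Don't Know offered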

Open ends are even more problematic.  For example, on the above-mentioned Energy study:

  • 46 percent of Web Rs provided no mentions to a set of open end questions as opposed to 0 percent of phone Rs.
  • 16 percent of Web Rs provided more than one mention to the same question versus 34 percent of phone Rs.

Unlike the options suggested above for closed ends, no good options come to mind on open ends.  When we have required a response we get lots of Rs giving us answers like "nothing."
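As a rough illustration of that open-end problem, here is how one might tally substantive mentions per R and flag throwaway answers like "nothing"; the junk-answer list and the respondent data are made up for the example.

    # Rough illustration: counting substantive open-end mentions per R.
    # The junk-answer list and respondent data are hypothetical.
    JUNK_ANSWERS = {"", "nothing", "none", "n/a", "no comment", "don't know"}

    def count_mentions(responses):
        """Number of open-end answers that are not obvious throwaways."""
        return sum(1 for text in responses
                   if text.strip().lower() not in JUNK_ANSWERS)

    # Hypothetical open-end answers from three Web respondents
    respondents = {
        "r001": ["billing was confusing", "an outage last winter"],
        "r002": ["nothing"],
        "r003": [],
    }

    for rid, answers in respondents.items():
        print(f"{rid}: {count_mentions(answers)} substantive mention(s)")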

We are using what we learn from the latest research and our own experience to evolve Web Questionnaire Standards that we believe will get us the best possible data.  But the science is still evolving and it's not always clear what course is best.

In subsequent posts I will take up the related issues of social desirability and visual vs. aural presentation.