
Posts from March 2008

Some Days are Better than Others

One of the downsides of being a compulsive conference goer is that you tend to hear essentially the same paper, although by different people from different companies, on more than one occasion.  One of the ways in which the MR research-on-research world differs from academic methodological research is the complete isolation in which people seem to work.  In academia, researchers are expected to be familiar with and to cite the work of others doing similar work.  Market researchers seem to feel no such obligation, so it's not unusual to see similar research designs executed and reported by different people at different points in time with no reference to previous findings by others.

It's in that vein that I report on some work presented by a researcher from Lightspeed at the recent CASRO Panels Conference looking at how panel response rates vary based on the day of the week when the initial invitation is sent.  While the authors do not reference previous work, having seen a number of similar presentations over the last few years I think a consensus is evolving around this issue.  That consensus seems to be that invitations mailed earlier in the week generally produce a higher response rate than those mailed later in the week.

Now I admit that mostly I have ignored this topic because a response rate on a convenience sample is a pretty meaningless thing.  Why does it matter?  More recently it's occurred to me that it does matter if you are trying to access a low-incidence population, or if the panel is known to have a limited number of panelists, say, in a smaller geographic area.  So there are conditions under which panel response rate does matter, because it ultimately could determine whether you can meet quota.

The impact of day of invitation as reported by Lightspeed was not dramatic, falling gradually over the week from a high on Monday afternoons of 39 percent to a low on Friday afternoons of 28 percent.

As a practical matter, how important is this difference?  Well, let's assume that the panel has only 3,000 potential respondents in the demographic of interest.  At a 39 percent response rate a Monday launch gets you about 1,170 respondents, while at 28 percent a Friday launch gets you about 840, roughly 330 fewer.  Suppose further that you have other qualifications that will yield an incidence of 30%.  Under those conditions, the Monday launch would produce 351 completed surveys while the Friday launch would produce just 252.
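To make that concrete, here is a quick back-of-the-envelope sketch in Python using the figures above.  The panel size, response rates, and incidence are the ones from this example; the function and variable names are just mine for illustration.

# Rough yield calculation for a panel launch: invitations answered,
# then screened down by incidence.  Figures are from the example above.

def expected_completes(panel_size, response_rate, incidence):
    # Expected completed surveys from a single invitation wave.
    return panel_size * response_rate * incidence

PANEL_SIZE = 3000   # potential respondents in the demographic of interest
INCIDENCE = 0.30    # share of responders who pass the other qualifications

monday = expected_completes(PANEL_SIZE, 0.39, INCIDENCE)   # Monday afternoon launch
friday = expected_completes(PANEL_SIZE, 0.28, INCIDENCE)   # Friday afternoon launch

print(f"Monday launch: {monday:.0f} completes")                   # 351
print(f"Friday launch: {friday:.0f} completes")                   # 252
print(f"Shortfall from a Friday launch: {monday - friday:.0f}")   # 99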

In most instances there is plenty of panel sample available and we don't need to worry about this issue.  However, when we are worried about whether a panel can provide enough sample, the day of the week on which we launch could well be the difference between hitting a quota and falling short.

One final note.  None of the studies I've seen look at Saturday or Sunday launches because panels generally discourage them.  But the trend in these data suggests that we would see still lower response rates.  It would seem that there is little advantage in launching on Saturday rather than waiting until Monday afternoon.


Online vs. Phone

I'm here in Hamburg at the 10th General Online Research conference, or simply GOR08.  The morning featured a keynote from Randy Thomas, a veritable Web research machine from Harris Interactive.  His talk was mostly about research he has reported on previously, but he did make four interesting observations about online as compared to telephone that I think represent something of a consensus the MR industry has evolved toward:

  1. Mostly, online compares favorably with telephone even without weighting schemes like propensity scoring.  He didn't qualify that much but I would add the qualifier: "if you are working with online samples balanced to reflect the basic demographics of the population."
  2. One clear exception is attitudinal scales, where online consistently delivers lower scores than telephone.  Most people ascribe this to respondents using scales differently in the visual as opposed to the aural mode.
  3. Another clear exception is topics with a social desirability bias.  It might be something as simple as health status or as complicated as illegal acts, but anytime a survey asks a question whose answer might put us in an unfavorable light, we are more likely to be truthful in the relative anonymity of a self-administered survey like the Web than when a human interviewer is involved.  For example, I have seen a number of comparisons that consistently show higher rates of smoking in Web surveys than on the telephone.
  4. The final exception is topics that might correlate with people's proclivity to go online in the first place.  This might be attitudinal or demographic.  For example, we might expect serious differences between Web and phone if the survey topic were attitudes toward personal computing, or where we are interested in demographic subgroups that we know have low Internet penetration (e.g., the elderly or minorities).

These four observations are probably a pretty good framework for thinking about when a Web survey is a good idea and when it is not.  That said, there is no real underlying science with Web in the same way we have probability theory underlying telephone, so we should always be wary of exceptions.  Although Randy did not tie the two together, he observed that, having been trained as a psychologist, his instincts were always toward replication, that is, finding the truth by repeating the survey over and over with different types of samples.  He was surprised when he got to Harris and found that the political science types there saw the way to the truth as surveying once and then weighting.  There probably is a lesson in there for how we should be thinking about online surveys.