
Posts from April 2007

Impact of the Federal Do Not Call Registry

When the Federal Do Not Call Registry was established in 2003 there was considerable speculation in the research industry about its likely impact on survey participation.  Some people hoped that an overall reduction in calls going to registered households might increase the likelihood that people would start answering the phone again and agreeing to do surveys.  There was a little bit of research on this but it was largely inconclusive.  Western Wats did a phone study and concluded that survey cooperation would improve.  I administered the same questionnaire in a Web study and was less sanguine about future participation.  But both studies seemed to say that people (1) didn't always understand what kinds of calls were covered and (2) didn't make much of a distinction between telemarketing and research.

Now we have a good-quality study of the real impact three years on (Michael Link, Ali Mokdad, Dale Kulp, and Ashley Hyon, "Has the National Do Not Call Registry Helped or Hurt State-Level Response Rates?" Public Opinion Quarterly, 70, 794-809).  The survey is the Behavioral Risk Factor Surveillance System (BRFSS), an ongoing federal study, somewhat like our own PULSE study, that interviews every month in all 50 states.  The study looked at response rates by state for the period from January 2002 through June 2005.  Their key measure was the correlation between the percentage of households in a state registered with the DNC and the monthly BRFSS response rate for that state.
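To make the study's key measure concrete, here is a minimal sketch of the kind of state-level correlation they report.  Every number below is invented for illustration; none of it comes from the BRFSS or the paper.

```python
from math import sqrt

# Hypothetical state-level figures: percent of households on the DNC registry
# and that state's monthly BRFSS response rate.
pct_registered = [48.0, 55.0, 39.0, 61.0, 52.0, 44.0]
response_rate = [51.0, 49.5, 53.0, 50.0, 52.5, 50.5]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# A correlation near zero, which is what Link et al. report, would mean that
# how heavily a state registered tells us little about its response rate.
print(round(pearson_r(pct_registered, response_rate), 3))
```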

Their conclusion is that the DNC has had no measurable effect.  It has neither helped nor hurt survey participation.  Response rates have continued to fall over the period they studied.  They call for continued monitoring and express some hope that a reduction in unwanted telemarketing calls eventually will have an impact.  But it's just that: a hope.


Web Panels: What Are They Good For?

This is essentially a query in my email today.  Its full text: "MSI has been using Internet panels such as e-Rewards for consumer research.  What should MSI's response be to clients who question the bias associated with using these panels? "

A good place to start is to conceive of an Internet panel as a sampling frame with some serious coverage problems.  It can only contain people who have Internet access, and right now roughly a third of US households don't.  That would be manageable if Internet access were distributed randomly in the population but, of course, it is not.  In general, the elderly and lower SES groups have less access than other demographic groups, and so we have one obvious level of bias.  But panels also do not include everyone who is on the Internet, only those who have volunteered to do surveys.  And so we have a second level of bias.  Not only do these panels not represent the full US population, they also don't represent everyone on the Internet.  In sample speak, not every member of the population has a known and non-zero chance of being selected for a survey, so there is bias.  Even the most enthusiastic online survey evangelist will admit this.  Some, like Harris Interactive, will claim that they understand that bias well enough to correct for it, but that's a pretty tall order.
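To make those two layers of bias concrete, here is a rough simulation.  The access rates, the panel join rate, and the link between age and the attitude being measured are all assumptions for illustration, not estimates from any real panel.

```python
import random

random.seed(7)

# Build a toy population in which Internet access and panel membership both
# skew younger, and the attitude we want to measure is related to age.
population = []
for _ in range(100_000):
    age = random.randint(18, 90)
    online = random.random() < (0.85 if age < 65 else 0.45)  # assumed access rates
    panelist = online and random.random() < 0.05             # assumed join rate
    satisfied = random.random() < (0.60 + 0.003 * (age - 18))
    population.append((online, panelist, satisfied))

def pct_satisfied(rows):
    return 100.0 * sum(sat for _, _, sat in rows) / len(rows)

print("True population :", round(pct_satisfied(population), 1))
print("Panel members   :", round(pct_satisfied([r for r in population if r[1]]), 1))
# The panel estimate drifts away from the truth because selection into the
# panel is correlated with age, which is correlated with the measure itself.
```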

Acknowledging the bias in Internet panels does not necessarily mean that we should not use them, but we should be sure to consider the bias in the context of how the client wants to use the data and to make sure there is a good fit.  In general it probably is a bad idea to use an Internet panel as the source when your goal is to accurately estimate some characteristic of the population such as satisfaction, product ownership, or brand awareness. Since the sample is not truly representative, you can't expect to generate an estimate  equal to the true value in the population. 

But an Internet panel might be a good choice if, for example, the client is interested in how individual characteristics correlate with satisfaction, how different attitudes and circumstances affect product purchase decisions, or what types of brand attributes appeal to different kinds of people.  They also can be helpful in studies that trend changes in attitudes and behaviors over time, even though the absolute measures for those attitudes and behaviors might not match up well with studies using true probability samples. 
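A small invented example of that last point: a panel with a constant downward bias can still track the direction and rough size of a change over time, even though its levels are off.

```python
# Hypothetical quarterly brand awareness: the "true" population values and a
# panel series that sits below them but moves the same way.
true_awareness = [40.0, 42.0, 45.0, 49.0]
panel_awareness = [33.5, 35.0, 38.5, 42.0]

true_change = true_awareness[-1] - true_awareness[0]
panel_change = panel_awareness[-1] - panel_awareness[0]

# The levels disagree by several points, but both series show roughly the
# same upward movement over the year.
print(true_change, panel_change)  # 9.0 vs. 8.5
```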

And while sample bias is an important issue, other factors such as cost, turnaround time, the complexity of the respondent task, or the need to present images or other graphical material are important considerations.  There are some types of studies that simply cannot be done in a survey mode other than Web.

Finally, the size of the geographic area to be studied can also be a factor.  Many panels simply are not large enough to provide sufficient sample for smaller geographic areas such as MSAs.  Or a panel may have to use every bit of sample that it has for that area, and in the process create a much more biased sample than would be created for a national study.  Most panel companies offer the option to control demographic bias by creating samples that are balanced to Census demographics.  For smaller geographic areas, this is sometimes impossible because they just don't have enough sample to start with.
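Here is a minimal sketch of what that demographic balancing amounts to: weight each respondent so the weighted sample matches Census shares on some characteristic.  The shares below are invented.

```python
# Invented Census and panel-sample age distributions.
census_share = {"18-34": 0.30, "35-54": 0.37, "55+": 0.33}
sample_share = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # panel skews young

# Post-stratification weight for each group: population share / sample share.
weights = {group: census_share[group] / sample_share[group] for group in census_share}
print({group: round(w, 2) for group, w in weights.items()})
# {'18-34': 0.67, '35-54': 1.06, '55+': 1.65}: older respondents get weighted up.
# In a small MSA a cell may have almost no respondents to begin with, so the
# weights become large and unstable, which is the problem described above.
```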

The academics would argue that the advantages of probability sampling outweigh even the disadvantages of the very low response rates that have become the norm in telephone surveys.  In some absolute sense, this might be true.  But our design decisions should be guided by a wider set of considerations, foremost of which is the ability to provide the client with valid, actionable results at a price s/he can afford and in a time frame in which those results can be useful.





"The Narrow Path Betweeen Science and Entertainment"

The quote in the title is from a presentation I heard at GOR by Holger Lütters from a German company called webAHP.  His general point was that online surveys are mostly pretty boring affairs, that it shows in the relatively high levels of satisficing we sometimes see, and therefore we need to do more to make completing an online survey more of an engaging experience.  He showed a variety of different scale presentation styles, all very graphic and colorful.  You can see examples here.  Their pièce de résistance is something they call "the Sniperscale."  [Sniperscale screenshot]  You can see a static image on the left but to really experience it go to the Web page link above and to the very bottom of the page.  As you will see, this is an obvious attempt to bring video game features to Web surveys.

I am undecided about whether this is a good idea, and there has been almost no testing of how this works in practice.  One thing for sure is that we need some new thinking to deal with the satisficing problem in Web surveys.  As we begin to measure this in some of our surveys we are discovering worrisome levels of straightlining and evidence that too many respondents simply are not reading the questions in our grids carefully.
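As an illustration of the kind of check involved, here is one simple way straightlining might be flagged in a grid.  The rule and the data are invented for illustration, not our production QA logic.

```python
def is_straightliner(grid_answers, max_distinct=1):
    """Flag a respondent whose grid answers use too few distinct values."""
    return len(set(grid_answers)) <= max_distinct

# Hypothetical answers to an eight-item grid on a 10-point scale.
respondents = {
    "r001": [7, 7, 7, 7, 7, 7, 7, 7],  # same answer to every item
    "r002": [6, 8, 5, 7, 9, 4, 7, 6],  # varied answers
}

for rid, answers in respondents.items():
    print(rid, "straightliner" if is_straightliner(answers) else "ok")
```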


The Perils of Going from Phone to Web

One of our Energy clients recently found herself in a tough spot.  We have been doing their customer sat research for years and putting those results in the context of our benchmarking studies to show how they rank compared to other similar utilities.  All of this work has been done by phone.  Now along comes J.D. Power with a study that shows their rank to be considerably worse than what we have been telling them, and her management is asking tough questions.  The J.D. Power study was done by Web using a variety of online panels.  So she called us to get our take on it.  That prompted me to write up a brief email summarizing what I see as the issues.  An edited version of that email follows.

First, in any transition of a study from phone to Web one has to be concerned about mode effects, either because of the change from aural to visual presentation or because of the presence or absence of an interviewer.  In the case of the former there is some research to suggest that on long scales respondents in the visual mode will tend away from selecting extreme responses.  In sat surveys this can lead to lower scores on the Web than on the telephone.  We have tested that on a key accounts study for a utility and not found it to be a problem.  The presence/absence of an interviewer can lead to social desirability effects, and while there is substantial evidence of this in areas such as health or other sensitive topics, I know of little evidence that it operates where satisfaction is concerned.  Bottom line is that the differences you are seeing are probably not due primarily to mode.  Or at least that's my view based on what I know at the moment.
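Some back-of-the-envelope arithmetic shows why that matters for sat scores.  Both response distributions below are invented; the only point is that pulling answers away from the top of the scale lowers the mean even when nothing real has changed.

```python
scale = list(range(1, 11))  # a 10-point satisfaction scale

# Invented response distributions: "phone" uses the top points more heavily,
# "web" shifts some of those answers toward the middle of the scale.
phone_dist = [0.01, 0.01, 0.02, 0.03, 0.05, 0.08, 0.12, 0.18, 0.22, 0.28]
web_dist = [0.01, 0.01, 0.02, 0.04, 0.07, 0.11, 0.16, 0.22, 0.20, 0.16]

def mean_score(dist):
    return sum(p * s for p, s in zip(dist, scale))

print(round(mean_score(phone_dist), 2), round(mean_score(web_dist), 2))  # 8.0 vs. 7.54
```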

Second, sometimes these transitions from phone to Web are more than just mode changes; they also can be sampling changes, which is what you are facing.  Switching from a probability sample randomly drawn from your client list or from a high-coverage frame such as D&B to a volunteer Web panel is very challenging.  In a probability sample from a good frame everyone has an equal chance of being selected and therefore you are more or less guaranteed a representative sample.  But with a Web panel, significant elements of the population are underrepresented or not represented at all.  And while the Web vendor can draw a sample that may appear to represent your customers on some key characteristics, they can't ensure that the sample is representative on all of the things that might matter--such as attitudes and behaviors--and so there is bias.  Understanding that bias and measuring it so we might adjust for it is extremely difficult.  And in fact, the academic survey community is largely in agreement that even in an age of very low response rates, the advantages of probability sampling outweigh the disadvantages of a low response rate.  Bottom line here is that I think the differences you are seeing are probably due to the shift in sampling strategy rather than mode.

Finally, there are important issues with the panels themselves that go beyond the fact that they are non-probability, convenience samples.  These are people who have signed up to do surveys.  Sometimes they are not who they pretend to be and sometimes their extensive experience as survey takers leads them to answer differently from respondents who may do only one or two surveys a year.   Using them well to produce valid research requires a strong QA methodology.  I would be interested to know who J.D. Power has used as a sample source and what steps they took to clean the data.


Left/Right or Right/Left: Does It Matter?

The third and last issue that Mick and I took up was the location of the navigation buttons.  The MSI standard has been to put the Next button on the left and the Previous button on the right.  We do this for four main reasons:

  • Web users tend to read the page with a left side bias and placing the Next button on the left makes it easier to find.
  • The Windows standard is to put the most often used functions on the left side.  Think of the menu bar in Word and how often you access the drop downs going left to right.
  • The default tab order sequence will activate the button on the left after a radio button is clicked.  Hitting return will "press" that button.  In earlier work we found that placing the Previous button on the left resulted in respondents backing up more often than when it was placed on the right and the Next button was on the left.  We believe this backing up was accidental and undesirable.
  • An experiment we ran a few years back suggested that respondents are indifferent on the issue, adapting quickly to whatever placement we presented.

In this research we had four experimental conditions:

  • Next on the left and Previous on the right
  • Previous on the left and Next on the right
  • Both on the left with Next above Previous
  • Next on the left and no Previous button

The effects we got were not necessarily dramatic, but they caused us to change at least one standard while reinforcing others:

  • Taking away the Previous button resulted in more breakoffs.  The difference was only about 3.4 percent but it was statistically significant (a back-of-the-envelope version of this kind of test is sketched just after this list).
  • We replicated the results of the earlier study and saw a very slight but statistically significant tendency for respondents to back up more when the Previous button was placed on the left.
  • None of the various treatments produced different results in the debrief questions that asked about enjoyment of the survey, ease of answering, length, etc.
  • When the Next button was placed on the right completion times were slightly longer (16.1 vs. 15.2 minutes).
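As noted in the list above, here is a back-of-the-envelope version of the kind of significance check that could sit behind the breakoff comparison.  The group sizes and exact counts are invented; only the roughly 3.4 point gap echoes our result.

```python
from math import erf, sqrt

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-proportion z-test; returns the z statistic and two-sided p-value."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 104 of 1,000 respondents break off with no Previous button
# versus 70 of 1,000 when the button is present (a 3.4 point difference).
z, p = two_proportion_z(104, 1000, 70, 1000)
print(round(z, 2), round(p, 4))  # roughly z = 2.7, p < 0.01
```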

Taken as a whole, there is not much here to cause us to change our current standard.  However, when we looked at how respondents answered grid questions with long scales we saw some evidence of an interesting pattern.  When the Next button was on the left there was a slight tendency for the distribution of responses to lean to the left across an 11-point horizontal scale.  When the Next button was placed on the right the distribution leaned slightly to the right.  And so, placing the Next button on the right sometimes produced higher mean scores than placing it on the left.  Why?

Two reasons come to mind.  First, we know that eyetracking studies show Web users read in an F-shaped pattern, that is, they focus more on the left-hand side of the screen than the right-hand side.  By placing the Next button on the right, respondents are forced to look there more, may end up processing more of the screen than they normally would, and therefore are slightly more likely to select responses from that area of the screen.  Alternatively, respondents may simply be saving "mouse miles," finding it easier and less effort to select responses from the right side when the Next button is placed there.  We can't tell which of these may be operating here.  We probably need an eyetracking study to figure that out.

For now, moving the Next button to the right side of the screen seems like a risk worth taking.  After all, one of the consistent problems with Web surveys is their tendency to yield lower satisfaction scores than telephone surveys.  If moving the Next button to the right makes it more likely that the respondent will consider the full scale, that may be worth a little accidental backing up, especially since our debrief questions showed no obvious preference.