
Posts from August 2008

Finding Duplicate Panel Respondents

Contributed by Colleen Carlin:

Just finished attending a webinar titled "Detecting and Rejecting Duplicate Responses in Online Surveys" presented by Bill MacElroy of Socratic Technologies. This problem is not a major risk with high-incidence populations, but when we are after low-incidence, frequently surveyed respondents the risk can be substantial.

Based on Bill's experience, there are five key elements to examine before letting a respondent begin a survey; together, he claims, they will detect 99 percent of duplicate respondents.

  1. Cookie match. This presumes that one routinely places cookies on respondent PCs and that the survey software always checks for them.
  2. IP address. This provides a geographic match.
  3. Browser string. This refers to a standard set of configuration information that is easily captured from a respondent's browser.
  4. Language setting of the machine.
  5. Client machine status, i.e., machine-level data.

All of these items provide a virtual 'fingerprint' of a particular machine and, combined, they can be extremely effective at detecting duplicate respondents. At least according to Bill MacElroy.
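
To make the idea concrete, here is a minimal sketch (in Python) of how a survey platform might combine those five signals into a single fingerprint and flag a respondent whose fingerprint has already been seen. The function and variable names are mine, not Socratic's, and this illustrates the general approach rather than their actual implementation.

```python
import hashlib

def machine_fingerprint(cookie_id, ip_address, browser_string, language, machine_info):
    """Hash the five signals captured at survey entry into one machine 'fingerprint'."""
    parts = [cookie_id or "", ip_address or "", browser_string or "",
             language or "", machine_info or ""]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

seen_fingerprints = set()

def already_responded(fingerprint):
    """Return True if this fingerprint was seen before; otherwise record it."""
    if fingerprint in seen_fingerprints:
        return True
    seen_fingerprints.add(fingerprint)
    return False
```

A single exact-match hash like this is probably too blunt on its own, since changing any one signal (clearing cookies, a new IP address) changes the whole fingerprint; a production system would more likely score partial matches across the individual signals before rejecting a respondent.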


More from 3MC

Several of the sessions on the last day of 3MC were devoted to the general topic of response styles in multinational research. While I was aware of the problem of people from different cultures responding differently (especially in scale usage), this was the first systematic discussion I had heard of the issues. Given that we typically want to standardize measurement and produce reasonably comparable results across countries, understanding response styles and how to correct for them, either through survey design or in post-survey analysis, is critical if we are to do good quality international research.

A response style is generally defined as the systematic tendency for people to respond consistently to a range of questionnaire items on some basis other than the content of those items (see, for example, Paulhus, D. L. (1991). "Measurement and Control of Response Bias," in J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of Personality and Social Psychological Attitudes (pp. 17-59). New York: Academic Press).

Response styles are most often reflected in the use of ordinal scales of agreement, satisfaction, favorability, etc. They are best understood by example. The table below describes the seven response styles identified by Baumgartner and Steenkamp (Baumgartner, H., and Steenkamp, J.-B. E. M. (2001). "Response Styles in Marketing Research: A Cross-National Investigation," Journal of Marketing Research, 38(2), 143-156).

Acquiescence: the tendency to agree with items regardless of content.

Disacquiescence: the tendency to disagree with items regardless of content.

Net acquiescence: the tendency to show greater acquiescence than disacquiescence.

Extreme: the tendency to endorse the most extreme response categories regardless of content.

Response range: the tendency to use a narrow range of response categories around the mean response.

Midpoint: the tendency to use the middle scale category regardless of content.

Noncontingent: the tendency to respond to items carelessly, randomly, or nonpurposefully (a.k.a. satisficing).

There is a considerable body of research that shows different response styles predominate in different cultures. So, for example, American respondents generally show a stronger tendency toward extreme response styles (ERS) than do Asians. Respondents from Mediterranean countries often exhibit stronger acquiescence and extreme response styles than do those from northern Europe.

As survey researchers we have two options. The first, and harder, is to use analytic techniques that help us score individual respondents on response style and then make statistical adjustments. This is tricky stuff, at least for me. The easier path would seem to be to design surveys to be response style neutral. One frequently advanced solution is MaxDiff. While there is strong statistical support for the use of MaxDiff, we've done some research showing it's a tough technique for respondents to use, at least in Web surveys. Another approach involves experimentation with scales to identify designs that seem less susceptible to differences in response styles. The number of scale points and full labeling versus endpoint labeling are the scale features typically varied in these experiments, but as near as I can tell there is not a whole lot of agreement on best practice.
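
For the analytic route, the simplest starting point is to compute per-respondent indices for the styles Baumgartner and Steenkamp describe and then use them as covariates or for within-respondent standardization. The sketch below (in Python, with function and column names of my own choosing) computes four such indices for a block of 5-point rating items; it is a rough illustration, not a full correction procedure.

```python
import pandas as pd

def response_style_indices(items, scale_min=1, scale_max=5):
    """Per-respondent response-style indices for a block of ordinal rating
    items (one row per respondent, one column per item)."""
    midpoint = (scale_min + scale_max) / 2
    n_items = items.shape[1]
    return pd.DataFrame({
        # Extreme response style: share of answers at either endpoint
        "extreme": ((items == scale_min) | (items == scale_max)).sum(axis=1) / n_items,
        # Midpoint response style: share of answers at the middle category
        "midpoint": (items == midpoint).sum(axis=1) / n_items,
        # Acquiescence: share of answers on the agreement side of the midpoint
        "acquiescence": (items > midpoint).sum(axis=1) / n_items,
        # Disacquiescence: share of answers on the disagreement side
        "disacquiescence": (items < midpoint).sum(axis=1) / n_items,
    })
```

Respondents, or whole countries, with unusually high extreme or acquiescence scores can then be examined or adjusted for before cross-country comparisons are made.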


A Puzzle

The latest issue of POQ has a meta-analysis by Bob Groves and Emilia Peytcheva that looks at the impact of nonresponse rates on nonresponse bias. Its primary finding is now familiar: we can expect significant nonresponse bias "when the causes of participation are highly correlated with the survey variables." In other words, if we do a survey on a topic and mostly get respondents who are interested in that topic, then our results likely will be different than if we also had those respondents who were not interested in the topic and therefore did not participate.
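
A toy simulation makes the point concrete. In the hypothetical population below, interest in the topic drives both the survey answer and the probability of participating, so the respondent mean drifts away from the true mean. The specific numbers and the logistic form of the response propensity are invented purely for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Interest in the topic drives both the survey answer and participation.
interest = rng.normal(size=n)
answer = 3.0 + 0.8 * interest + rng.normal(scale=0.5, size=n)  # survey variable
propensity = 1 / (1 + np.exp(-(interest - 1.5)))               # chance of responding
responded = rng.random(n) < propensity

print("true mean:       ", round(answer.mean(), 3))
print("respondent mean: ", round(answer[responded].mean(), 3))
print("response rate:   ", round(responded.mean(), 3))
```

Weaken the link between interest and the survey answer and the two means converge even at the same low response rate, which is exactly the pattern the meta-analysis describes.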

It bothers me that I am unsure how to think about the implications for market research. In these days of abysmal response rates and online panels it is becoming increasingly common to make surveys more attractive to respondents by disclosing the survey topic when we recruit. One panel company, Survey Sampling, even offers a service that, as near as I can tell, matches people from their panel to the survey topic at the time the sample is drawn. In many surveys we screen respondents so that we only get people who already own certain products or plan to buy certain products in the near future. Is this a good thing?

The answer probably depends on what a client is trying to learn and what business decision is driving the research. So if we do a survey on, say, health insurance products and mostly get people who are interested in health insurance, is there a possibility that the client might design products that are less successful in the marketplace than if we had interviewed a more balanced sample? The purist in me argues that it always is best to get as broad and representative a sample as we can, but this seems to get tougher and tougher all the time. Maybe the best we can do is make sure that we understand the impact of so-called "topic salience" on nonresponse bias and be sure to raise it in our design discussions with clients.