
Posts from December 2006

The Online Panel Rationale

The Barcelona conference was mostly a gathering of panel companies talking about their business.  Randy Thomas of Harris Interactive, the Programme Committee Chair, wrote an interesting "editorial" that was delivered as a sort of foreword to the set of published papers.  It provides a good, concise statement of the rationale for panels that might be subtitled, "Why We Need To Do This And Why It's OK."

The "Why We Need To Do This" part is familiar to all of us.  For a whole lot of reasons response rates in traditional modes have fallen to the single digits and in the process surveys have gotten significantly more expensive, take longer to execute, and are less and less representative of the population.  Panels were invented to solve these problems by making the it faster and cheaper to interview lots of people.

The "Why It"s OK" part is tougher, but Randy argues that we can assure generalizablity (defined as "congruence in results between modes and sample sources") if four factors are present:

  1. "The topic being measured is not closely related to how a person was chosen to be in the sample."  In other words, don't do online surveys about Internet or online issues and expect your results to reflect the population.
  2. "A sufficient number of respondents participate."  This, of course, is the traditional argument that the larger your sample the less error in your results. Probability sample devotees call this "The Literary Digest Fallacy," a reference to one of the colossal failings in the history of survey research, the botched call of the 1936 US presidential election.  So, yes, larger sample sizes are better than smaller sample sizes, but a large sample alone does not guarantee that  it represents the population of interest.
  3. "Respondents are sufficiently diverse and in general proportionate to the people to which we seek to generalize our results."  It's hard to argue that point, and there probably is a corollary about post stratficition adjustment (aka weighting).
  4. "Respondents have sufficient motivation and cognitive capacity to complete a self-administered survey." 

This last point is looking increasingly like an Achilles heel for the panel industry and it occupied a good deal of attention at the conference.  Research buyers, which is to say clients, have pretty much accepted the "Why We Need To Do This" rationale and points 1 thru 3 above.  But doubts have begun to emerge around number 4, the panel companies know it, and the smart ones are doing their best to get out ahead of it.  So far their efforts have been less than convincing, and I expect that things will get worse before they get better.  But in the end the responsibility for the quality of the research we deliver to our clients rests with us as researchers, not with the panel companies.  Interesting times.


Are All Panels the Same?

This is one of the most pressing problems of the day and the answer seems more and more elusive.  Way back in 2005 a team at Stanford led by Doug Rivers (co-founder of Knowledge Networks) and Jon Krosnick (father of satisficing) compared seven different US panels and RDD telephone to look for straightforward mode effects and comparability across basic socio-demographics.  Without going into all the detail, the basic findings were very reassuring.  While there were differences, they were not so great as to set off any alarm bells. The results seemed to say that there are few differences to worry about among the major national US panels.

Unfortunately, things have looked different in practice.  On the few studies in which we have compared Web panel results with RDD telephone we have been disappointed, both on behavioral questions related to health care and on general attitudes about current issues.

So it was with interest that I heard a paper by Ted Vonk, Robert van Ossenbruggen, and Pieter Willems at the Barcelona conference in which they described the results of a study of 19 online panels in The Netherlands.  But before I describe the study I must note that The Netherlands is a bit different from the US.  Internet penetration is at 80% (vs. less than 70% in the US) and broadband penetration is the highest in the world (63% of households).  Just the fact that they have 19 panels in a country that small says something!  The study drew a sample of 1,000 from each panel and executed a roughly 12-minute survey simultaneously across all 19 panels.  The questionnaire asked about basic socio-demographics, some political attitudes, brand and advertising awareness, and Internet behaviors.  The key findings:

  • The response rates varied from 18% to 77% with an average of 50%!  Amazing by US standards.  The panels with the lowest response rates were those that do not drop non-responding panel members on any kind of regular basis.
  • Newer panel members respond better than older members.
  • Panel members recruited by traditional methods (from other surveys, directly solicited, etc.) respond better than those who self-select via banner ads, links, etc.
  • Lottery incentives seem to generate lower response.
  • Differences in response rate did not translate into differences in the survey data.
  • Panelists were generally representative on socio-demographics but not representative in terms of political attitudes as measured by recent election outcomes.
  • Almost two-thirds of respondents belonged to more than one panel and multi-panel membership was highest on panels using self-selection methods.

So what's the takeaway here?  Well, I think two things.  First, while panel recruitment and maintenance practices make a difference in response rates and in the likelihood of encountering professional respondents who do lots of surveys and belong to multiple panels, it does not appear that this creates samples with alarmingly different socio-demographic profiles.  Second, there do seem to be attitudinal differences, and that should be very concerning to us.  Understanding those differences and figuring out how best to deal with them is a huge challenge, and there is much work that still needs to be done.

And, of course, remember what I said at the outset:  this is The Netherlands and things may be quite different there.


Hot Air in the Windy City

Back in September, with much hype and fanfare, a group of 33 “industry leaders” gathered in Chicago for a “Respondent Cooperation Summit.” Now I want to make clear at the outset that I didn’t attend, so everything I am about to write here is based on what I have read and heard since. I didn’t boycott; I just had a scheduling conflict I could not resolve. But I must admit that after seeing a warm-up with lesser lights at the CASRO Technology Conference in June I was less than enthusiastic.

So what went on? Well, as you might expect with a group of MR company execs, a sprinkling of consultants, and a few high-profile clients, not much of any substance. There was lots of hand wringing, a generous supply of hot air, some pointing of fingers (especially at clients for always wanting faster and cheaper), and no real solutions other than the usual platitudes about buyers and suppliers working together, industry quality standards, improving the respondent experience, blah, blah, blah. The most dramatic moment of the gathering was supplied by Kim Dedeker, VP of Market and Consumer Knowledge at P&G, who shocked attendees with her critique of online research, calling its representativeness into question and raising the spectre of professional respondents. She even pointed to one specific example at P&G where online research was just plain wrong.

Other than raising the angst level across the industry, did any of this do any good? By itself, I don’t think so. But it could be one more small step toward sensitizing clients to research quality issues and thereby create more receptivity to arguments about quality and value rather than just price. As I have written elsewhere, clients are beginning to tune into the quality issues that are emerging around online panels, and while the panel companies all profess an obsessive quality focus, the reality is that the responsibility for ensuring quality lies with the researchers, not the panel companies.

But don’t expect any sort of massive rejection of online and a return to telephone any time soon. The online juggernaut still has a full head of steam, and the challenge is how to do it in a way that produces results that are, as Dedeker says, "replicable and predictable."


Survey Geek Eats Crow

About six months ago we had a client do some major hand wringing about bad behavior by Web panelists.  She had been to some sort of session by people from the Burke Institute who had convinced her that Web panels were full of people misrepresenting themselves to qualify for surveys and then, once in, clicking through mindlessly to get the incentive.  She came armed with some Burke-inspired ideas for identifying these people so we could delete their survey data prior to analysis.  I responded with an eloquent and convincing defense of Web panel quality.  Sure, there are some bad apples, but not enough to worry about if you are working with a quality panel company.

Now I've joined with a friend and his colleague to write a paper on this very topic, citing even worse behaviors than what the client brought to us, and advocating even more extreme measures to root out these scoundrels.  I have posted the paper here:  http://www.marketstrategies.com/rbdocs/Downes-LeGuinMechlingBaker.pdf .  I'm just glad that I had the good sense not to blog on the topic at the time.  At least I don't think I did.  In general, the problem is worse than I thought but not so bad as I feared.  And dealing with it is fairly straightforward.  Read the paper.
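
The paper lays out the specifics, so purely as an illustration of the general approach (and not the actual rules in the paper), here is a sketch of two checks that come up in conversations like the one with that client: flagging completes that came in implausibly fast and flagging respondents who straight-line a grid of rating questions.  The field names and thresholds are made up for the example.

```python
# Illustrative data-quality checks on completed interviews; thresholds
# and inputs are hypothetical, not those from the paper linked above.

def flag_speeder(duration_seconds: float, median_seconds: float) -> bool:
    """Flag completes that took less than a third of the median time."""
    return duration_seconds < median_seconds / 3

def flag_straightliner(grid_answers: list[int]) -> bool:
    """Flag respondents who gave the identical answer to every grid item."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

# Example: a 4-minute complete against a 15-minute median, with a grid
# answered all 3s, would be flagged on both checks.
print(flag_speeder(240, 900))         # True
print(flag_straightliner([3, 3, 3]))  # True
```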


Let's Not Kill the Golden Goose Who Lays the Golden Eggs

This is a take-off on a paper title from the ESOMAR Panels Conference.  The golden goose is, of course, the panel respondent.  The theme of this paper, and of another as well, is treating respondents better in online surveys.  Pete Comley, a Brit who worked as a psychologist before running a company called Virtual Surveys, authored the second paper and framed his talk around Transactional Analysis (TA), a sort of popular refashioning of Freud that had its 15 minutes of fame back in the 1960s.  The short description is that we interact with others, and they with us, in one of three ego states: Parent, Adult, and Child.  As long as both actors are in the same ego state the interactions are happy and you produce "warm fuzzies," but when you get crossed transactions you get "cold pricklies" and counterproductive behavior.

Sadly, our tendency in Web surveys is to create crossed transactions and cold pricklies.  We are always directing and correcting respondents, sometimes in not very friendly ways.  Or, in the language of TA, Parent->Child.  No adult enjoys being treated like a child, so most Web surveys are handing out cold pricklies.  No wonder we have high termination rates and satisficing!  Comley's advice is to work harder at creating Adult<->Adult transactions with respondents.  He has seven suggestions:

  1. Create email communications (including survey solicitations) in Adult<->Adult style.
  2. Handle screenouts gently and with friendly, rational explanations for why we are not interested in their opinions.
  3. Make our error messages friendlier.
  4. Allow respondents some freedom to select which questions they answer.
  5. Allow respondents to add their own questions or answers.
  6. Allow respondents to continue discussing the topic after the survey is complete, say, by directing them to an online bulletin board.
  7. Create community forums on specific topics and point respondents to them.

To be honest, my enthusiasm for these drops off some as I move down the list but I love the general concept. Unfortunately, Comley did not come with any data to convince us that working harder to produce warm fuzzies in respondents has clear benefits over the cold pricklies we seem to be creating now.  That would be an interesting experiment.  In the meantime, the least we might do is have a close look at our error messages.