
Posts from October 2009

The third rail of Research

Jeffrey Henning did such a nice job of blogging the ESOMAR Online Research Conference that I recommended his blog in a previous post.  But I've just read his summary piece on communities and worry that he has missed a key point, one that I tried to stress in no uncertain terms in my closing remarks. 

In virtually every conversation of any length about social media in MR and especially about branded communities the subject of marketing rears its ugly head.  The discussion in Chicago was no different.  The key point that way too many people don't get is that our ability to function as an industry is tied to our maintaining a clear distinction between research and marketing.  It's a line in the sand we dare not cross.  The third rail we dare not touch.

It was not that long ago in the US that we were sweating bullets because of the fear that MR would be subject to the then-pending National Do Not Call legislation.  It was widely understood that inclusion in the legislation would kill telephone research in the US.  We eventually were exempted because we do not deliver commercial messages.  Raise the same issue with European researchers and they will share their continuing fear of being lumped in with Direct Marketing by the EU and thereby suffering all sorts of restrictions that will strangle research.

MR is being transformed by technology and part of that has meant a whole new group of practitioners entering our industry, many of whom have yet to understand the long-term values of our industry.  While we welcome them we also need to educate them.  It's not a matter of some quaint ethical principles.  It's a matter of survival.

Have we reached a turning point in online research?


I am starting to believe that 2009 may be a watershed year in the evolution of online research. At least three important developments stand out.

First, after several years of widespread concern about panel data quality, the industry appears to be arriving at something approaching a consensus about what constitutes a "good" panel, the steps a researcher should take when choosing a panel provider, and the best methods of post-survey cleaning to produce a useful dataset. The themes one sees unfolding in the ARF's Quality Enhancement Process are similar to those in other industry initiatives such as the ESOMAR 26 Questions and RFL's Platform for Data Quality Progress. Now we need to execute.

Second, there is the gradual recognition among the more thoughtful people in our industry that the academics were right all along in maintaining that while there are many challenges to conducting true probability sampling, when done right it yields more accurate estimates than the non-probability methods used by all but a few panels. If you've not been following the public debate on this you can pick it up here. No one is suggesting that we should stop using panels, only that we make wiser choices about when to use them, make more restrained claims about their representativeness, and improve our ability to interpret their results.

Third, the research industry is finally coming to realize the preeminent role of questionnaire design in research both as a guarantor of survey data quality and a means of preserving the public's willingness to do surveys. We will continue to have arguments about the best ways to present survey questions, but we know with certainty now that shorter and more engaging surveys are essential to the continued survival of online survey research. Once again, the key will be execution.

Taken as a whole, these three developments can at last move online research out of its "wild west" period and into something resembling a mature research method. Challenges remain and many have yet to catch the wave, but this is the most progress I think we've seen in the roughly 15 years since online first emerged. It's been a long time coming and a welcome sight.

Sometimes it’s the little things

One of the things that has always irked me about online is the poor graphical design of many of the surveys that find their way to my inbox. At one point we established a folder on our corporate network where people could dump screenshots of the especially bizarre. We called the folder "Hall of Shame." The web offers us a vast palette of color and interactivity with which to design surveys, but as often as not it seems that people just make a mess of it.

Over the years we have done experiments with some aspects of design including background color, screen size, progress indicators, and location of the navigation buttons. Some of this research is reported in Mick Couper's excellent book, Designing Effective Web Surveys. We have used the results of those experiments along with what we can find in the web usability literature to create the corporate standards that we apply to every survey we do. Oddly enough, no element of those standards seems to be getting more questions from clients these days than placement of the Next and Back or Previous buttons.

The natural positioning in many people's minds is to mimic the Back and Forward buttons on the browser: Back on the left and Next on the right. But our research shows that it works best for respondents if we do the reverse: Next on the left and Back on the right. We've run two experiments and the results have shown:

  • Surveys can take longer to complete with the Next button on the right and the Back button on the left.
  • Respondents terminate at a slightly higher rate with the Next button on the right and Back button on the left.
  • Respondents terminate at an even greater rate when we don't give them a Back button at all.
  • If you put the Back button on the left respondents will use it more, probably by mistake.
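For what it's worth, the layout rule above is simple enough to capture in code. Here is a minimal sketch in Python that emits the navigation markup for one survey page; the function name, markup, and CSS class are my own illustration, not any particular survey platform's API:

```python
def navigation_row(show_back=True):
    """Return the HTML for a survey page's navigation row.

    Per the guideline above: Next sits on the LEFT and Back on the
    RIGHT (the reverse of the browser's own Back/Forward layout),
    and a Back button is offered by default, since our experiments
    found that dropping it altogether raised termination rates.
    """
    buttons = ['<button type="submit" name="nav" value="next">Next</button>']
    if show_back:
        buttons.append('<button type="submit" name="nav" value="back">Back</button>')
    return '<div class="survey-nav">' + " ".join(buttons) + "</div>"
```

Baking the rule into a helper like this, rather than leaving button order to each survey author, is exactly how a corporate standard gets applied to every survey without anyone having to remember it.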

Now I've also just stumbled across some research in the form design literature by web form design expert Luke Wroblewski that confirms this placement and adds a new dimension I'd not thought of: color. He is trying to understand the best way to present what he calls "primary action" and "secondary action" buttons. In web form design this typically means Submit and Cancel. The results seem to me to be equally applicable to Next and Back. His findings:

  • People seem to work with forms most efficiently and with the fewest errors when the Submit button is on the left and the Cancel button on the right.
  • The worst design—the one that slows people down and causes the most mistakes—has the Submit button on the far right and the Cancel button on the left.
  • Using color to distinguish the buttons (green for Submit and red for Cancel) seems to work very well. Better still, make Cancel a link rather than a button.
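Wroblewski's recommendation can be sketched the same way: the primary action rendered as a prominent (green) button on the left, and the secondary action demoted to a plain link on the right. Again, the function name and markup here are purely illustrative:

```python
def action_row(primary="Submit", secondary="Cancel"):
    """Return HTML for a primary/secondary action pair.

    Following the findings summarized above: the primary action is a
    green button on the left; the secondary action is visually
    demoted to a plain link on the right, so the two are hard to
    confuse at a glance.
    """
    return (
        '<button type="submit" style="background:green;color:white">'
        + primary + "</button> "
        + '<a href="#" class="secondary-action">' + secondary + "</a>"
    )
```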

Now this may seem like a silly little issue and not worth getting all worked up about, but as we finally begin to recognize the importance of creating a hospitable and easy experience for survey respondents, these little things do matter.