Today I got a promo piece from a global online panel vendor. On the front page they describe their offering as "high quality representative and validated panels in Brazil, Russia, India, China and now Mexico." According to Internet World Stats the Internet penetration in these countries is 38%, 43%, 7%, 32%, and 27% respectively. I renew my plea: let's ban the R word.
Posts from March 2011
Mick Couper really likes pictures. He not only takes a lot of them, he also has a longstanding interest in how incorporating pictures into Web surveys affects how people answer questions. Way back in 2004 he and his colleagues showed that the frequency with which respondents reported certain types of events (shopping, going to sporting events, etc.) varied significantly depending on the picture displayed with the question. Pictures of infrequent events (shopping in a department store) depressed reports of shopping frequency, while pictures of more frequent events (shopping in a grocery store) increased the number of shopping trips people reported. In subsequent research he and his frequent collaborators (Roger Tourangeau and Fred Conrad) showed that respondents' reports of their health status varied when the question was accompanied by a picture of a healthy woman jogging versus a sick woman in a hospital bed. While the 2004 research showed assimilation effects (pictures of more frequent events led to higher frequency reporting), the second experiment showed that pictures sometimes can have contrast effects (people reporting being less healthy when shown a picture of an obviously fit person exercising). So while it's clear that respondents react to pictures, the direction of that reaction and its impact on how they answer survey questions is unpredictable.
In the current issue of POQ Mick has an article co-authored with Vera Toepoel that looks at whether verbal instructions can overcome the influence that pictures generate. Their research shows that well-written questions with clear instructions can obviate the influence of pictures, even when the two are inconsistent. In other words, there seems to be a hierarchy of features in which Web respondents give much greater weight to verbal instructions and cues than they do to pictures and graphical images. In an additional analysis they find no evidence that including pictures in Web surveys creates a more respondent-friendly survey; that is, respondents do not report a more positive survey experience with pictures than without.
As if we needed it, this is one more bit of evidence that there is no substitute for clear, concise, and well-written questionnaires. Further, using pictures or other graphical images to try to increase engagement may well come at a cost that is hard to predict or measure.
My morning email update from Warc includes one of those breathless headlines: "Germany takes to mobile web." The teaser goes on to tell us that "the number of German consumers accessing the mobile internet has almost doubled during the last 12 months." Clicking through to the full item, there is a reference to a report from BITKOM estimating that 18 percent of Germans had logged into the Internet using a cell phone. (They claim "a representative sample" but offer scandalously little detail about their method; we can probably assume it was online.) While that's almost twice the figure of a year ago, it's still 18 percent. This is roughly the same as current estimates of Internet penetration in Syria, to pick one of many examples. I was reminded of a recent report from comScore estimating that 47% of US mobile subscribers were accessing the Web from their devices. Assuming that "mobile subscribers" accounts for virtually the entire US population (it comes close), this is roughly equivalent to US Internet penetration in the late 1990s.
So for researchers at least, mobile is still very much a niche, a rapidly growing niche but a niche nonetheless.
In the beginning we had Sugging and shortly thereafter came Frugging. (The correct capitalization on these terms has always eluded me.) Mercifully we strayed from this convention when we discovered push polls. The question before us: what should we call what newsmax.com is up to?
Here is the deal. They send out survey invitations to email lists of unknown origin, ostensibly to get opinions about some current issue. If you click their link you actually get a choice of four or five surveys, each one five or six innocuous questions about Obama, Palin, healthcare reform, etc. They have a standard intro that tells you that newsmax.com is a leading online news organization whose results are reported on all of the big media outlets, and the survey topic is always Urgent. (My favorite was the claim that this was the first ever online poll conducted about Barack Obama.) At the bottom they ask for your email address; otherwise your "vote" is not counted. Give them your email address and the conservative propaganda starts flowing to your inbox.
But it doesn't stop there. As it turns out, they will accept a blank survey as long as you give them your email address. Then you get an offer for a free Dynamo World Band Emergency Radio if you will just subscribe to their magazine, which they say features "exclusive stories the major media won't report." Doubling down on the paranoia, they tell us that the Department of Homeland Security advises that we all have a radio.
So it turns out that it's just Sugging after all.
Kristin Luck writes a column for RBR and for two issues in a row she's worked the worry beads about what she sees as declining interest in online panel data quality. In the current column Kristin draws heavily on a reaction to her first column by Jackie Lorch. Jackie summarizes two legs of the online data quality stool (respondent validation and better survey design). Kristin points out that there is still a problem, quoting an anonymous client who says, "There hasn't been the progress with online research that many in the industry think. We have internal tests that clearly demonstrate that."
As I have written in this space before, respondent validation and better survey design, while necessary, are not sufficient to really fix the problem with these sample sources. We also must address the third leg of the stool: the selection bias built into pretty much every online panel and intercept method. Every time the issue of online panel data quality comes up, that graph from the first stage of reporting of the ARF Foundations of Quality Initiative showing all of the variation in results across the 17 panels pops into my head, and I ask myself whether improved respondent validation and better survey design can make this problem go away. I think not.
We need a lot more focus on the third leg of the stool, and there are encouraging signs. More and more researchers are concluding that simple weighting by demographics is not enough. Propensity weighting has been widely discussed although not broadly used, and while evaluations of it in academic journals have not been all that positive, additional experimentation could produce a breakthrough. More promising, I think, is experimentation with new approaches that focus on the sample selection stage and the use of high quality probability surveys (e.g., the American Community Survey, the General Social Survey) as a reference point for selecting representative samples from single or multiple panels.
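The basic mechanics of propensity weighting against a reference survey can be illustrated with a toy sketch. This is a minimal, hypothetical illustration (not any vendor's actual method): the reference and panel samples are reduced to covariate "cells," the propensity of a case being a panel case is estimated within each cell, and each panel respondent is weighted by (1 - p)/p so the weighted panel mirrors the reference distribution. The function name and the single-covariate setup are my own simplifications; real applications use many covariates and model-based propensity estimates.

```python
from collections import Counter

def propensity_weights(reference, panel):
    """Cell-based propensity weights for panel cases.

    reference, panel: lists of hashable covariate profiles (cells).
    Returns a dict mapping each cell to the weight (1 - p) / p,
    where p is the share of that cell's cases that came from the panel.
    """
    ref_counts = Counter(reference)
    pan_counts = Counter(panel)
    weights = {}
    for cell, n_pan in pan_counts.items():
        n_ref = ref_counts.get(cell, 0)
        p = n_pan / (n_ref + n_pan)  # propensity of being a panel case
        # Weight shrinks over-represented cells, boosts under-represented ones.
        weights[cell] = (1 - p) / p if p < 1 else 0.0
    return weights

# Toy example: the reference survey is half "young", half "old",
# but the web panel skews heavily young.
reference = ["young"] * 50 + ["old"] * 50
panel = ["young"] * 80 + ["old"] * 20
w = propensity_weights(reference, panel)
# w["young"] → 0.625, w["old"] → 2.5, so the weighted panel
# (80 * 0.625 = 50 young, 20 * 2.5 = 50 old) matches the reference mix.
```

The same (1 - p)/p adjustment generalizes when p comes from a logistic regression on many covariates rather than simple cells; the cell version here just makes the arithmetic easy to check by hand.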
Kristin may be worried but I am more optimistic than at any time since online panels first came on the scene almost 15 years ago.