
Posts from March 2013

CASRO Online -- Final thoughts

I've just realized that I never posted my last comments from CASRO Online. Shame on me.

The organizers took the risk of placing three papers on mobile questionnaire design as the last three papers of the conference and on a Friday afternoon to boot.  The risk paid off as the room was probably around three-quarters full.  The last three papers (from Maritz, Burke, and Market Strategies) were all reports on experiments with different ways to present questionnaires on small mobile screens.  All three were very good papers.  After what seems like an eternity of hearing conference speakers talk about the challenges of the small screen and, in some cases, propose some potential ways of overcoming those challenges, it was great to see ideas driven by and tested with data.

I'm not about to announce, "And the winner is. . ." There clearly is more experimentation to be done and we can expect best practices to evolve over the next few years. But these papers were a great start. Put them together with the two papers on imputation and data fusion from the first day and you have an excellent starting point for thinking seriously about mobile questionnaire design. There were other good papers at the conference, but to my eye these five papers on mobile were the reason to be there.

I started this series of posts by describing this conference as less a place for new ideas than a place to hear the grind-it-out stories of how new ideas are implemented in the real world.  God is indeed in the details.  It’s fun to listen to predictions and visions, to argue about where the industry is headed and how to avoid the fate of the dinosaurs.  But it’s also more than a little tiring to continually hear the equivalent of “Just Do It.”  Getting from here to there is harder than it looks.  One overriding message from the opening keynote to the last session was that change happens slowly in MR.  Just doing it is not enough.  It needs to be done right.


CASRO Online – Part 4

Day 2 of CASRO Online. John Bremer, Conference Co-Chair, has promised "a riveting day." I'm ready for that. He's reviewing yesterday's session and makes it sound better than I remember it. Maybe it's me. To my eye, it had a really strong start and sort of drifted downhill a bit. But still better than some other conferences I've attended over the last couple of years.

First topic this morning is long surveys. The underlying premise is the sad realization that clients are just not going to accept shorter surveys. So what to do, especially for mobile? My buddy Frank Kelly is leading off and he's got data showing that people's tolerance for long surveys declines as you go from a PC to a tablet to a smartphone. So this presentation is going to be about "chunking" surveys into respondent-friendly pieces, and then putting them back together ("fusion"). Two kinds of chunks: within the same respondent (presumably over time) or across respondents (maybe at the same time). This is serious business and it takes a lot of thoughtful planning. It also requires some modeling to understand the structure of the overall questionnaire and the "hooks" that enable you to chunk it out and then fuse it back together. To be honest, I've not followed the discussion other than to note that there seems to be some Bayesian modeling involved. Maybe I'll get it when I read the actual paper. But I think this is important stuff for us to be paying attention to. I'm not sure that the within-respondent approach has legs because, in the end, the same respondent still has to do it all. But it might be really interesting if it can be done across respondents.
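To make the across-respondents idea concrete, here's a toy sketch of what chunk assignment might look like. The module names, the one-module-per-respondent rule, and the hook questions are all my own invention for illustration, not anything from Frank's paper.

```python
import random

# Hypothetical questionnaire: a core set of "hook" questions everyone
# answers, plus topical modules that get split across respondents.
CORE = ["age", "gender", "brand_usage", "purchase_intent"]   # hooks for fusion
MODULES = {
    "A": ["ad_recall_q1", "ad_recall_q2"],
    "B": ["satisfaction_q1", "satisfaction_q2"],
    "C": ["pricing_q1", "pricing_q2"],
}

def assign_chunks(respondent_id, modules_per_respondent=1):
    """Give every respondent the core hooks plus a random subset of modules,
    so each sees a short survey but all questions get covered across the sample."""
    chosen = random.sample(sorted(MODULES), modules_per_respondent)
    questions = CORE + [q for m in chosen for q in MODULES[m]]
    return {"respondent": respondent_id, "modules": chosen, "questions": questions}

# Each respondent answers ~6 questions instead of all 10; fusion later
# stitches the modules back together by matching on the core hooks.
for r in range(3):
    print(assign_chunks(r))
```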

Now Frank is wandering into potential heresy by wondering whether all questions need to be answered by all respondents, routing aside. Do we need the same number of completes for every question? If not, that's another and much simpler way to shorten surveys, but my guess is that it would be a tough sell to clients.

A new presentation with a somewhat similar theme from folks at Gongos and SSI. Their focus is chunking across respondents and dealing with the missing data that you inevitably accumulate in the process. What I already like about this presentation is that the hooks they want to use to put Humpty Dumpty back together are attitudinal and/or behavioral questions rather than demographics. Good choice! So this is really about respondent matching. Cool. They also tried hot deck imputation and it seemed to work as well. Bad news: matching by demos seems to work better than matching by attitudes/behavior. Strange.
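For readers who haven't run into it, hot deck imputation fills a respondent's missing answers by copying from the most similar "donor" respondent. Here's a minimal sketch of the idea using made-up hook variables; it's my own illustration of the general technique, not the Gongos/SSI method.

```python
import pandas as pd

# Toy data: respondents answered the hook questions (attitude1, attitude2)
# plus one module each; NaN marks the module they never saw.
df = pd.DataFrame({
    "attitude1": [5, 4, 2, 5],
    "attitude2": [3, 3, 1, 2],
    "module_a":  [7, 6, None, None],   # only respondents 0-1 saw module A
    "module_b":  [None, None, 4, 5],   # only respondents 2-3 saw module B
})

def hot_deck(df, target, hooks):
    """Fill missing values in `target` by copying from the donor whose
    hook answers are closest (smallest squared distance)."""
    out = df.copy()
    donors = out[out[target].notna()]
    for i in out.index[out[target].isna()]:
        dist = ((donors[hooks] - out.loc[i, hooks]) ** 2).sum(axis=1)
        out.loc[i, target] = donors.loc[dist.idxmin(), target]
    return out

fused = hot_deck(df, "module_a", ["attitude1", "attitude2"])
print(fused)
```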

Overall, it seems to me that there are two other ways to deal with the problem of downsizing questionnaires for mobile. The simplest and most obvious is to just design shorter questionnaires. We ask a lot of questions that just don't need to be asked, but convincing clients of that has generally gone nowhere. The second option is using surveys to ask a few questions to supplement respondent profiles already built from big data. This is the "getting to why" approach. Things may eventually evolve in that direction, but not soon enough. Modularization, or chunking, looks to me like the right next step. There is some literature on this sort of thing over on the scientific side of the industry that people working on this problem should delve into.


CASRO Online – Part 3

The afternoon session has launched and we are going to be hearing about the data privacy issues inherent in data integration. Presenters are from J.D. Power and SSI. They did a multi-mode study (online panel, web intercept, and RDD) in three countries – US, China, and India.

They have done a nice little review of some previous data collected by Alan Westin showing that concerns about privacy in the US have escalated with time. 80%+ think they have lost all control of their personal data. So people have been paying attention after all. Their study also debunks the myth that younger people don't care about privacy. One key defense mechanism they use: lying about personal data.

So there is little evidence in this study that concerns about privacy are eroding. If anything, they are elevated.

Next up, Carol Haney talking about mobile and data integration, specifically Facebook as it turns out. She's started by reminding us that people are using mobile for all kinds of things – purchase decisions, social media posting, and even surveys not necessarily designed for mobile. To explore this they did a survey of millennials about alcohol consumption. They asked Rs if they could scrape their Facebook data and found that the older the R, the less likely he or she was to agree to have FB scraped. But when they could scrape and added in the FB information, they were able to do some interesting segmentation work along with some nice storytelling. Promising stuff.

The next presentation looks really interesting – combining website data and surveys – but I have a conference call back in my room. Lovestats is sitting behind me so check out her blog if you are interested.


CASRO Online – Part 2

Back at CASRO. First up is Craig Overpeck and he's going to talk about ISO. I'm a huge ISO fan and have blogged about it many times. I'm going to let this opportunity to do it again pass.

Now there is a panel of mostly former Harris Interactive employees who are going to talk about DIY. Efrain Ribeiro from Lightspeed is up first. He thinks DIY companies are often innovators and a good place to look for new ways of doing things. Now Phil Garland from Survey Monkey. His argument seems to be that designing questionnaires and fielding them is an automation task that MR should embrace. Eventually it's an argument about where MR adds value. Next up is Ryan Smith from Qualtrics. I'm not sure what his point was. Now Randy Thomas from GfK, whose wife seems to understand the central issue, i.e., Randy's job may be at risk. Maybe she should be on the panel. But Randy is talking about the key issue: the expertise needed to design and field a good questionnaire. That tends to be my argument as well, though one could argue that MR firms are not as good at these things as they should be. Finally, George Terhanian from Toluna. He told three stories that seem to come down to, "DIY is a nice idea because it can automate and democratize research."

So we have two people who run panels (not much of a threat from DIY, and they could probably go to DIY in a heartbeat), two people who sell DIY services or software, and Randy Thomas, who has spent his career studying questionnaire design. Two guys in hard-sell mode, two guys who probably don't care much, and my buddy Randy.

Is it time for lunch yet?


CASRO Online – Part 1

I'm at the CASRO Online Conference in San Francisco. This is one of my favorite conferences because it's all about people trying to make things work. It's generally not about grand pronouncements and dire warnings about the future. It's a conference that features people doing research on research, running experiments, and trying to solve the real world methodological challenges that we face today. It's taking new methods to the next step, smoothing out the rough spots, and making the practice of MR better.

All that said, Gayle Fuguitt, the keynoter, wants to talk about 2020 and beyond. Her key concept seems to be "the voice of authenticity." Technology now provides tools for people to express themselves, either indirectly (big data) or directly (social media). The industry's challenge is to create the roles to make sense of it for clients.

Second big idea: Clients like trusted partners and are disinclined to change. They also have norms they like to see maintained. So the opportunity for MR companies in trusted partner roles is to bring new solutions. She sees consolidation of "traditional MR methods" augmented by new online methods (communities, social media, etc.) driving growth. Above all, don't give up on delivering real value. So it's not a replacement strategy, it's an augmentation strategy. And it's based on leveraging a long-term relationship and continuing to deliver value.

This has been a sane, reasonable view on the evolution of MR. No hyperbole, just good, thoughtful advice based on 30 years in the industry. Nice.

Next up is Pete Cape, a guy I quote regularly on a whole bunch of topics. His topic is inattentive respondents. He's run an experiment with grids in which he put some classic traps -- "click the response at the far right." He also had a version where he broke the grids into individual screens. His general point seems to be that people just have cognitive lapses. They lose focus on the task. Their minds wander. Nothing nefarious about it. But then people have different heuristics for answering when they don't have an answer. Some will choose the same answer all the time (modal answer). Others will use the last answer (frequency bias).

Overall, what Pete sees is about a 5% failure rate on the classic traps when there are separate questions. But the failure rate is much higher when the trap questions are in the grid. That 5% is normal, according to the cognitive psychology literature. So the problem here is not bad people, it's bad design. The grids are the problem.

As I was listening it occurred to me that this is just another chapter in the ongoing saga of finding the reasons why online has sometimes produced bad results. The usual suspects are online questionnaire design and the respondents, some of whom are gaming the system to get the incentive. Cheaters. But there is a third suspect that gets too little attention: convenience sampling from sources of unknown quality. I had thought that the industry was finally getting around to going after that with approaches like those being used by Toluna, YouGov, GMI, and others. No evidence of that at this conference so far.

Now David Bakken is going to talk about sample size and whether it matters. He points out that sample size choices typically are driven by the desired precision of the estimates and the likely data collection costs. But David is a Bayesian so this will be interesting.

David has done a nice job of demonstrating how a Bayesian approach can be used to give some sense of how good an estimate is. He's done an equally good job of demonstrating how difficult it is. The thing about probability sampling is that it's easy to do and there are lots of readily available tools to help you do it. Not so with Bayesian methods. But the ideas are really intriguing and I would hope to see more people experiment with them. Because what we are doing now, which is mostly pretending that our non-probability samples are just like probability samples, is not just wrong, it's not working.
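For the curious, here's a toy contrast between the two mindsets on the simplest possible problem, estimating a proportion. The numbers are invented and this is my own illustration of the general idea, not anything from David's presentation.

```python
from scipy import stats

n, successes = 400, 180          # e.g., 180 of 400 respondents pick a brand
p_hat = successes / n

# Classical: normal-approximation 95% margin of error.
moe = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5
print(f"Classical: {p_hat:.3f} +/- {moe:.3f}")

# Bayesian: a Beta(1, 1) uniform prior updated with the data gives a
# Beta posterior; the 95% credible interval comes straight from it.
posterior = stats.beta(1 + successes, 1 + n - successes)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Bayesian 95% credible interval: ({lo:.3f}, {hi:.3f})")
```

With a probability sample the two answers barely differ; the appeal of the Bayesian framing is that it can fold in prior information and be extended when the sample isn't a nice random draw, which is exactly where the extra modeling work comes in.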

The final presentation in this session is by Melanie Courtright and Annie Pettit. They're going to talk about scales. Risky stuff. But they've conducted an experiment to test various kinds of scales across samples from 10 countries. As it turns out, their goal is to understand how response styles (ERS, ARS, MRS, etc.) play out across different scale designs and cultures. It turns out that as we age we drift toward more extreme response styles. Of course, says the 65-year-old blogger, we are wise and know more than younger people!

They spend a little time testing some hypotheses from Hofstede having to do with individualism and masculinity. Nothing there. Slight differences in response styles as scales get longer. When they looked at reliability across 8 batteries on different topics, they found no significant differences. They found some of the classic differences in response styles by country.
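For anyone puzzling over the acronyms: ERS, ARS, and MRS are extreme, acquiescent, and midpoint response styles, usually computed as simple shares of a respondent's answers. A toy sketch with made-up ratings (my own illustration, not the authors' analysis):

```python
import numpy as np

# Toy data: 4 respondents x 6 items on a 5-point scale.
ratings = np.array([
    [5, 5, 1, 5, 1, 5],
    [4, 3, 3, 4, 3, 3],
    [3, 3, 3, 3, 3, 3],
    [4, 4, 5, 4, 4, 5],
])

# ERS: share of answers at the scale endpoints (1 or 5).
ers = np.mean((ratings == 1) | (ratings == 5), axis=1)
# ARS: share of answers in the agreement range (4 or 5).
ars = np.mean(ratings >= 4, axis=1)
# MRS: share of answers at the scale midpoint (3).
mrs = np.mean(ratings == 3, axis=1)

for i, (e, a, m) in enumerate(zip(ers, ars, mrs)):
    print(f"Respondent {i}: ERS={e:.2f} ARS={a:.2f} MRS={m:.2f}")
```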

I've been doing this on the fly, so pardon the escalation in typos.

More later.