Whither mobile?

I spent two days last week in Chicago at Market Research in the Mobile World (MRMW). I have now been to all four North American versions of this event, going all the way back to Atlanta in 2011. Last week’s iteration prompted me to go back and look at my post about the Atlanta event as a way to gauge how much things may or may not have changed.

One obvious change was the sorting out of marketing applications from genuine research. In Atlanta we heard from too many companies whose business was collecting personal data for direct marketing, and not always transparently. As far as I could tell, the presenters in Chicago were primarily focused on research. A second change was the emphasis on case studies. Atlanta was mostly about the potential for mobile—what could be—while Chicago was mostly about completed studies.

Atlanta also was heavy on hype, and there was plenty of that over the two days in Chicago, including a fair amount on the potential of wearables. But it was hard to get too worked up about any of it given the sobering start.

  • The first presentation was a paean to mobile as “the most important marketing channel, ever” that included the claim that 55% of people really like targeted ads, a figure undermined by virtually every credible survey on online privacy.
  • That was followed by an update on last year’s eye-popping announcement from General Mills that they hoped to be doing 80% of their research via mobile in 2014. No exact figure was given, but it was clear that it turned out to be harder than they thought. They have dismantled their mobile team, but they keep plugging away.
  • The segment was closed out with a panel of industry heavyweights on the topic, “Investments in (mobile) MR—where are they going and why?” The answer: they are not. The money is all going to big data analytics.

That was followed by about 30 presentations with an overwhelming emphasis on pure mobile applications: in-the-moment, geolocation, and mobile ethnography (as opposed to the unintentional mobile respondents who make up the bulk of mobile MR right now). Some were genuinely interesting and others seemed like sales pitches. The highlights and lowlights, in no particular order:

  • There were a couple of nice papers on the topic of integrating mobile with other platforms as a way to understand the context in which people view different ads.
  • I heard no discussion of the sampling challenges beyond a preliminary report from TNS aimed at allaying concerns about bias in online surveys that include respondents using mobile devices. One might have gotten the impression from most presentations that there is 100% smartphone penetration and people are willing to use them to do pretty much anything researchers ask them to do.
  • The potential power of mobile ethnography was nicely demonstrated by several presentations.
  • There was a somewhat bizarre though seemingly heartfelt epilogue to one presentation pleading with us not to be swayed by the media into giving up on Google Glass.
  • There was an equally bizarre presentation on mobile’s ability to reduce social desirability bias and satisficing that opened with the presenter acknowledging that all he knew about the phenomena was what he learned from their Wikipedia entries. It seemed to be just another study on recall.
  • There continues to be some sloppiness around proper privacy protections, especially in mobile ethnography. There was a panel on the topic (I was on it), but fewer than half of the attendees were in the room.
  • A number of presentations took up the theme of respondent-driven design, emphasizing better interfaces, choice of channel, and better-designed if not shorter surveys. On this latter topic, one presenter showed that a well-designed 15-minute survey is possible with no discernible drop-off in data quality. No doubt music to the ears of clients (of whom there were very few in attendance) and even researchers who cling to long surveys like an NRA member clings to his assault rifle.

My bottom line is that things have changed substantially since Atlanta, but nowhere near as much as mobile evangelists predicted. Mobile has become mostly an extension of online, and while there is no shortage of startups offering in-the-moment and other pure mobile solutions, there are no clear signs yet of research buyers flocking to them. Mobile has substantially changed how we interact with one another and the world around us. It must be a fundamental concern in every research design, but it has yet to truly transform MR in any meaningful way.


A little theory might help

As best I can remember, it was November 2009 when I first sat through a conference presentation setting out some basic design considerations for mobile research. I can’t imagine how many times since then someone has presented essentially the same advice, whether from the conference stage, in a corporate white paper, via a webinar, or wherever. Mostly these rules, guidelines, tips, etc. are based on some combination of experience and intuition. They generally lack any theoretical basis or methodological experiments to back them up. That’s for the academics. Market researchers just want to get on with it, just do mobile. It’s the same approach we took with online surveys, and perhaps one of the reasons why they often are so wretched.

Having some theoretical underpinnings for mobile design strikes me as a good thing. It might provide a rationale for why we do what we do, and serve as the basis for hypotheses to drive empirical experiments that might actually lead to better practices. Perhaps more importantly, a good theory could be really useful when helping clients to rethink their approach to surveys in a mobile world.

All of this came to mind while reading this post by Raluca Budiu arguing that information theory has a lot to offer as we think about designing for mobile devices. In a nutshell, her argument is that designing for mobile is all about dealing with basic limitations in the human-device communication channel. Some of those limitations are technology-based, such as the bandwidth of the connection to the Internet, the processing capability of the mobile device, and the size of the screen. Other limitations have to do with the user, things like working memory capacity and the focus that a user can bring to the task given the environment. Granted, Budiu’s principal focus is on websites, not surveys, but optimal design is arguably even more important for surveys, because motivation is generally much lower when completing a survey than when searching for information.

And, as Budiu points out, things are about to get a lot more complicated.

It’s clear that we’re moving towards an interconnected world populated by a plethora of devices — from smart thermostats, smartwatches and smart glasses, smart phones, phablets, tablets, laptops, desktops, smart TVs, and smart tabletops. We need a unified theory for designing for the continuum of screen sizes. This theory cannot reduce all these systems to a single denominator; designing for smartwatches is not the same as designing for tablets, and designing for mobile is not the same as designing for the desktop. Although many of the principles may be the same, they get applied differently on different devices. We need more nuance.

The challenge may be even greater for researchers than for website designers. One of our goals is to ensure that the way in which we present a survey does not cause the respondent to answer differently because of the device he or she is using. Having a little theory to guide us could be a big help.

Pleeezz!

Today’s update from Research-live.com has this headline: Online trackers not optimised for mobile could 'compromise data quality.' It goes on to explain:

GMI, which manages more than 1,000 tracking studies, claims that online trackers that haven’t been optimised for mobile platforms may exclude this growing audience, which could lead to a drop in data quality, reduced feasibility and the possibility of missing whole sections of the required population from research.

Let me be clear. I don’t disagree that online surveys need to be optimized for mobile and that the number of unintentional mobile respondents (aka UMRs) is large and growing. But a warning from an online panel company that scaring away UMRs may be leading to a drop in data quality because of “the possibility of missing whole sections of the required population from research” just drips with irony.

Let’s start with the fact that online research, at least in the US, by definition excludes the roughly 20% of the population that is not online. Research using an online panel of, say, two million active members is excluding about 99% of the adult population. As the industry has moved more and more to dynamic sourcing it’s hard now to know how big the pool of prospective online respondents is, but it’s a safe bet that the vast majority of US adults are missing, and not at random.
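For anyone who wants the arithmetic behind that “about 99%,” here is a back-of-the-envelope sketch; the figure of roughly 245 million US adults is my own assumption for illustration, not a number from the post.

```python
# Back-of-the-envelope coverage arithmetic (illustrative only).
# Assumptions: roughly 245 million US adults; a panel of 2 million active members.
us_adults = 245_000_000
panel_members = 2_000_000

covered = panel_members / us_adults
print(f"Panel covers {covered:.1%} of US adults; excludes {1 - covered:.1%}.")
# Panel covers 0.8% of US adults; excludes 99.2%.
```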

Surely, if we have figured out a way to deal with the massive coverage error inherent in the online panel model, we can handle the mobile problem.

I suspect that the real issue here is feasibility, not data quality, just as the now near-universal use of routers is about inventory management rather than improved representativeness. I wish that online panel companies would spend more time trying to deal with real data quality issues like poor coverage and inadequate sampling methods, but that’s only going to happen if their customers start demanding it.


If You Need to Know What’s Legal, You’re Already on the Losing Path

This is the second and final post from Michael Link on location.

Privacy and ethics concerns in research are not new, but they have taken on considerably higher visibility in our 24/7 news world as researchers test the bounds of new measurement approaches. At a recent symposium on Leveraging Location hosted by Nielsen, a panel of legal experts offered their thoughts on the issues researchers are encountering today. Their insights went well beyond location data, hitting on aspects that involve many of us working with data from the public. (Obligatory warning: I’m not a lawyer; these are simply my observations and interpretation of the discussion.) Three broad lessons caught my attention:

First and foremost, start with the respondent/consumer, understanding and acting in accordance with their expectations. How data are collected and the insights generated should be readily apparent to the “average person.” If your starting point is “the law” or “what is legal,” you’ve already put yourself in a hole. Laws and regulations provide a base, a bare minimum; what the public demands is often much more. As researchers we should operate within the reasonable expectations of the majority of the public, yet not necessarily feel constrained by folks on the extremes.

Second, to lead in innovation you cannot be afraid to have your name in the paper and receive negative comments. In essence, as one panelist put it, “get comfortable not being comfortable.” Pushing the envelope involves a degree of calculated and real risk. If your organization likes to keep a low profile and acts with alarm at the first half-dozen negative emails received, then you might want to take “innovation leader” off of your long-term business strategy. Note that the first lesson above is still in effect, so setting expectations accordingly and having a good grasp of the potential risks are imperative.

Third, time is an important and often under-appreciated dimension of attitudes towards privacy and ethics. These are not static features of our society, but rather evolving concepts. What may have been unthinkable a few years ago (using a smartphone as a “virtual wallet”) now seems commonplace. Likewise, certain aspects of data collection in this digital/organic data era that seem unreasonable to the public today (and hence would be good to avoid) may become more readily accepted with time and incremental exposure. The trick to innovating, of course, is knowing when the time may be right. There is no clear right answer here, but the public (and the press) will let you know if you have chosen unwisely.


Leveraging Location: Where Are We?

My buddy, Michael Link, is Chief Behavioral Methodologist at Nielsen. A couple of weeks back he organized a symposium focused on location. I asked him to do a guest post here, and this is the first of his two reports.

Location can have a strong influence on people’s attitudes and behaviors, but what does the term really mean and how can we measure it with any degree of confidence? Location may be a stationary place or geography (e.g., home, work, store, theater) or a transitory pathway (e.g., the route taken from point A to point B, with stops or points of interest in between), with each providing different forms and levels of insight. Traditionally the province of recall surveys and activity diaries, location measurement is now being opened up by mobile technologies and big data sources, which offer an array of new routes to measuring and using location information. To address these and related issues, Nielsen recently hosted a symposium entitled “Measurement on the Move: Leveraging the Power of Location,” bringing together 30 research experts from telecom, digital, consumer goods, media, academia and government for two days of discourse on sources of location data, developing meaningful metrics, turning measures into insights, and the legal and ethical challenges involved.

Borrowing concepts espoused by former Census Director Robert Groves, Nielsen Chief Demographer and Fellow Ken Hodges laid out the key issues by contrasting location as “design data” (collected from known populations for a specific purpose, often with controlled methods) versus “organic data” (largely unstructured, massive datasets that arise from the information ecosystem with few controls).

In the context of location today, design data are typically those captured via a mobile device using GPS-enabled trails and “check-in” features: the link between the respondent and the locations is typically meaningful and durable, the focus is on addresses and geography (fixed longitude and latitude), and researchers often seek to freeze movement, thereby capturing a moment in time. In contrast, organic data come from sources such as telecom carrier records (largely cell phone tower information), where the link between people and location is often transitory and may actually have little meaning; the focus is on broader spaces, proximity, and general patterns, with researchers seeking to understand movement over spans of time. These are very different approaches to location, often addressing different questions and requiring different analytic techniques. Both have challenges in terms of coverage (persons and geography), location measurement specificity (variations in measurement exactness), ease of capture, and validation of the insights gained. Location isn’t a new concept, but it is one being renewed by technological change, offering multifaceted opportunities as well as challenges.