Leveraging Location: Where Are We?

My buddy Michael Link is Chief Behavioral Methodologist at Nielsen. A couple of weeks back he organized a symposium focused on location. I asked him to do a guest post here, and this is the first of his two reports.

Location can have a strong influence on people’s attitudes and behaviors, but what does the term really mean, and how can we measure it with any degree of confidence? Location may be a stationary place or geography (e.g., home, work, store, theater) or a transitory pathway (e.g., the route taken from point A to point B, with stops or points of interest in between), with each providing different forms and levels of insight. Measuring location has traditionally been the province of recall surveys and activity diaries, but mobile technologies and big data sources are opening an array of new routes to capturing and utilizing location information. To address these and related issues, Nielsen recently hosted a symposium entitled “Measurement on the Move: Leveraging the Power of Location,” bringing together 30 research experts from telecom, digital, consumer goods, media, academia, and government for two days of discourse on sources of location data, the development of meaningful metrics, turning measures into insights, and the legal and ethical challenges involved.

Borrowing concepts espoused by former Census Director Robert Groves, Nielsen Chief Demographer and Fellow Ken Hodges laid out the key issues by contrasting location as “design data” (collected from known populations for a specific purpose, often with controlled methods) versus “organic data” (largely unstructured, massive datasets that arise from the information ecosystem with few controls). In the context of location today, design data typically are captured via a mobile device using GPS-enabled trails and “check-in” features: the link between the respondent and the locations is usually meaningful and durable, the focus is on addresses and geography (fixed longitude and latitude), and researchers often seek to freeze movement, capturing a moment in time. Organic data, in contrast, come from sources such as telecom carrier records (largely cell phone tower information), where the link between people and location is often transitory and may actually have little meaning; the focus is on broader spaces, proximity, and general patterns, with researchers seeking to understand movement over spans of time. These are very different approaches to location, often addressing different questions and requiring different analytic techniques. Both face challenges in terms of coverage (persons and geography), location measurement specificity (variations in measurement exactness), ease of capture, and validation of the insights gained. Location isn’t a new concept, but it is one being renewed by technological change, offering multifaceted opportunities as well as challenges.


Not much new with mobile

A few weeks back I attended the Market Research in the Mobile World conference in Minneapolis. This was the third in a series of North American conferences (there are events in Europe and Asia as well) that styles itself as “the original, premier event for the Mobile Market Research Industry.” I had been to the first event in Atlanta in 2011 as well as last year’s event in Cincinnati. I was in Minneapolis in part to represent ESOMAR on a panel, but also to find out what’s new with mobile. As it turns out, not much.

The style of this conference has always been somewhat frenetic. By my count there were 31 presentations over the two days. Presenters zipped on and off the stage, leaving little time to reflect on what was said or to connect the dots into a larger picture. And, as one would expect, many of the presenters were from new entrants who view the industry solely through the lens of mobile. So it was like assembling the elephant one piece at a time. And, of course, there were the sales pitches from the podium, most of which we have all heard countless times before. One good thing: the organizers have filtered out what Mark Michelson likes to call “the cowboys,” meaning companies organized principally to collect PII and sell mobile advertising rather than to do research.

One of the more telling moments of this conference was Larry Gold’s request for help in developing a better method for estimating mobile market research revenues. Larry is editor of Inside Research and currently pegs the US spend on mobile at around $42M, a drop in the bucket for an $8B industry. He knows he’s not capturing what he calls “multimode,” meaning the portion of online surveys that respondents choose on their own to complete by mobile, people we increasingly refer to as “unintended mobile respondents.” But he also knows that he is missing all of the firms who only do mobile. This seemed like the obvious audience to help him out. Any ideas? Silence. What do you think the current spend really is? A few guesses, ranging from $1M to $400M. I asked him the next day whether anyone had approached him with ideas. Nada.

Far and away the two best presentations of the conference were the bookends. Jeanine Bassett from General Mills (just down the road from the conference venue) opened the conference by describing her company’s migration to mobile, the first concrete evidence I have seen of a major client making an all-out commitment. She embraced a target of having 80% of the company’s research on mobile by the end of 2014. Dan Foreman of Lumi Mobile closed the conference with a vision of the future of mobile that tried to pull all of the strands together in a way that described not just mobile, but a transformed industry.

Right now it is slow going.  That’s something of a mystery given the penetration levels, not just of mobile phones but also of the mobile web. I have some thoughts on that but I’m saving them for the AMSRS Conference next month in Sydney.


Is research on research the real deal?

David Carr had a piece in last Sunday's New York Times about the difficulty of distinguishing journalism from activism. His first sentence sums up the issue pretty succinctly, "In a refracted media world where information comes from everywhere, the line between two 'isms' — journalism and activism — is becoming difficult to discern." His case in point is Glenn Greenwald, the Guardian reporter who has been breaking all the Snowden stories. Greenwald seems to be a guy with a very strong suspicion of government, and it shows in what he covers and how he writes about it. Cable news provides still better examples, where people who describe themselves as journalists routinely put a political agenda ahead of the objectivity that some of us expect (hope) from "the news."

We have a similar problem in market research when it comes to distinguishing between good methodological research and what we like to call research on research (RoR). Good methodological research is based in honest and objective scientific inquiry. Hypotheses are formed, the relevant literature reviewed, experiments designed and executed, data analyzed, hypotheses accepted or rejected, conclusions reached, and potential weaknesses in the research fully disclosed.  The best of these studies end up in peer-reviewed journals where they help us to build and refine research methods, brick by brick.

Much of RoR, on the other hand, has become something much different. It often features a point of view rather than a hypothesis, and the exploration of the data is a search for proof points rather than an objective analysis aimed at uncovering what the data can tell us. The end product typically is a white paper, designed to sell rather than to inform. We might attribute some of the poor quality of RoR to a lack of training and skill, but I expect most of it comes back to the simple fact that MR is a business. Academics achieve success by doing good, solid research that earns the respect of their peers. MR companies succeed by selling more of their stuff.

All of which is not to say that there is not some good RoR being done, studies that are based in the fundamentals of objective scientific inquiry. It's just that it's getting harder and harder to tell the difference. And given the methodological disruption that has come to characterize our industry over the last decade, that's a real problem for all of us.


Ending Day 1 at 3D by predicting the future

The last session of the first day focused on social media. Edward Malthouse presented on the link between social media and purchase behavior. He presented some cool stuff on attempts by brands to create competitions as a way to engage (How does Superman shave? Design a new McDonald's sandwich. And so on.). His data are from the Air Canada frequent flyer program. The program wanted to learn more about how to get members to redeem their miles, so it asked members to go to the site and suggest rewards that would encourage them to redeem. His approach is based in System 1 and System 2 thinking. Promotions that use incentives typically rely on System 1 (habitual responding). Promotions that try to engage are directed at System 2 (effortful processing). He measured engagement by the number of words and the degree of elaboration in the suggestions that members made at the site. He offered a whole bunch of hypotheses, but the key one is that the more a promotion engages System 2, the greater the odds of purchase. His data generally supported that view.
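To make that analytic idea a bit more concrete, here is a minimal, purely hypothetical sketch (not Malthouse's actual model or data) of how one might test whether a simple engagement proxy like word count raises the odds of redemption, using a logistic regression.

```python
# Hypothetical sketch only -- the data and variable names below are made up and
# this is not Malthouse's actual analysis. It illustrates the general idea: use
# word count as a crude proxy for System 2 engagement and test whether it raises
# the odds of redeeming miles, via logistic regression.

import pandas as pd
import statsmodels.api as sm

# Toy records: one row per program member who posted a suggestion.
df = pd.DataFrame({
    "word_count": [5, 12, 40, 3, 55, 22, 8, 70, 15, 33],  # length of the suggestion
    "redeemed":   [0,  1,  1, 0,  1,  0, 0,  1,  1,  0],  # later redeemed miles?
})

# Logistic regression of redemption on the engagement proxy.
X = sm.add_constant(df[["word_count"]])
result = sm.Logit(df["redeemed"], X).fit(disp=False)
print(result.summary())

# A positive coefficient on word_count (odds ratio > 1) would be consistent with
# the hypothesis that deeper engagement increases the odds of purchase/redemption.
```

An elaboration score could be added as a second predictor in the same way; the point is simply that "engagement" gets operationalized as something countable before it can be linked to purchase behavior.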

We ended with an interactive session designed to have the audience answer three questions about the MR industry in 2023. Tables in the room were divided into three groups with each group asked to answer one question. So the unit of analysis below is the table.

  • Sixty-six percent said that the basic source data for insight will be 50% digital and 50% non-digital. The remainder opted for 80% digital and 20% non-digital. This at the 3D conference, after a day of hearing presentations on Big Data.
  • Sixty percent said that 30% of insights will come from clients and 70% from agencies. The remainder said 70% from clients and 30% from agencies. Blogging comrade Jeffrey Henning sees this as sample bias.
  • Seventy percent said that 80% of decisions will be made by people and 20% by machines.

A good day. Now for some drinks!