Leveraging Location: Where Are We?

My buddy, Michael Link, is Chief Behavioral Methodologist at Nielsen. A couple of weeks back he organized a symposium focused on location.  I asked him to do a guest post here and this is the first of his two reports.

Location can have a strong influence on people’s attitudes and behaviors, but what does the term really mean and how can we measure it with any degree of confidence? Location may be a stationary place or geography (e.g., home, work, store, theater) or a transitory pathway (e.g., the route taken from point A to point B, with stops or points of interest in between), with each providing different forms and levels of insight. Traditionally the province of recall surveys and activity diaries, location measurement is now being transformed by mobile technologies and big data sources, which are opening an array of new routes to capturing and utilizing location information. To address these and related issues, Nielsen recently hosted a symposium entitled “Measurement on the Move: Leveraging the Power of Location,” bringing together 30 research experts from telecom, digital, consumer goods, media, academia and government for two days of discourse on sources of location data, developing meaningful metrics, turning measures into insights, and the legal and ethical challenges involved.

Borrowing concepts espoused by former Census Director Robert Groves, Nielsen Chief Demographer and Fellow Ken Hodges laid out the key issues by contrasting location as “design data” (collected from known populations for a specific purpose, often with controlled methods) versus “organic” data (largely unstructured, massive datasets that arise from the information ecosystem with few controls). In the context of location today, design data are typically those captured via a mobile device using GPS-enabled trails and “check-in” features -- the link between the respondent and the locations is typically meaningful and durable, the focus is on addresses/geography (fixed longitude and latitude), and researchers often seek to freeze movement, thereby capturing a moment in time. In contrast, organic data come from sources such as telecom carrier records (largely cell phone tower information) where the link between people and location is often transitory and may actually have little meaning; the focus is on broader spaces, proximity, and general patterns, with researchers seeking to extract an understanding of movement over spans of time. These are very different approaches to location, often addressing different questions and requiring different analytic techniques. Both have challenges in terms of coverage (persons and geography), location measurement specificity (variations in measurement exactness), ease of capture, and validation of the insights gained. Location isn’t a new concept, but it is one being renewed by technological change, offering multifaceted opportunities as well as challenges.

No surprise: It just keeps getting worse

The latest numbers on wireless-only households as measured by the NHIS have just been released.  They now estimate that as of December 2008, 20.2 percent of US households had only a wireless telephone.  The 2.7 percentage-point increase in the second half of 2008 is the largest six-month increase since the NHIS began collecting these data. 

I recently saw some estimates from SNL Financial that appear to have taken the NHIS trends and projected them forward.  Their data say that by 2013 a whopping 37 percent of US households will be wireless only. 
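It isn’t clear exactly how SNL Financial built their projection, but the simplest approach would be a straight-line extrapolation of the NHIS semiannual figures. A minimal sketch, where the extrapolation method and the December endpoints are my assumptions, not SNL’s:

```python
# Straight-line extrapolation of the NHIS wireless-only household share.
# The two data points (17.5% in mid-2008, 20.2% in Dec 2008) are from the
# posts above; everything else is illustrative.

def extrapolate(start_share, rate_per_half_year, half_years):
    """Project the wireless-only share forward, capped at 100 percent."""
    return min(100.0, start_share + rate_per_half_year * half_years)

# Observed six-month increase: 20.2 - 17.5 = 2.7 percentage points.
rate = 20.2 - 17.5

# Ten half-years take us from December 2008 to December 2013.
projection_2013 = extrapolate(20.2, rate, 10)
print(round(projection_2013, 1))  # 47.2
```

Notably, running the latest 2.7-point semiannual jump straight out to 2013 lands well above SNL’s 37 percent, which suggests their model assumes the growth rate slows.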

As I have noted in the past, the most statistically robust way of dealing with this problem is to spend the extra money and dial some cell phones.  The financial hit is not too bad as long as you only have to do limited dialing.  But as the proportion of cell-only Rs increases, so do the costs.  While there is always the option of mixing in wireless-only Rs from an online panel, that is a journey into a statistical no man's land.

New report on wireless substitution

Kate Harris has pointed out to me that CDC just released a new report on the prevalence of wireless-only households in the US, and this time they are providing state-level estimates.  The variation across states is dramatic, to say the least.  Oklahoma is estimated to have a whopping 26.2 percent of its households unreachable via a landline phone, while Vermont is at just 5.1 percent. 

These estimates are based on 2007 survey data when the national estimate was 14.7 percent of households.  In 2008 that estimate rose to 17.5 percent.

Mobile Research Conference 09 - Day 2

I’m sure you’ve noticed that three weeks have elapsed and I’m only on day 2. Well, a few things intervened. Still, I’d sum up the rest of the conference – which ran the gamut from strong presentations to a fascinating but somewhat tangential soliloquy on response rates – in very simple terms. What did I learn you should and shouldn’t do in mobile research?

Should

1.       Use on immediate, emotive, recall, and location-specific topics. Examples of ideal mobile survey questions: How attractive is that package you just picked up at Tesco? Which advertisement do you remember from the commercial break that just ended, and did you like or dislike it? Ipsos found that ad recall was more accurate in mobile vs. web surveys, though verbatims were much less rich.

2.       Expect to get nearly all of your responses very quickly; if you don’t want that, stagger release of the sample.

3.       Recognize that unlike other modes, it costs respondents to participate (not just their time), even down to a per-response cost if you use SMS.

4.       Consider an appropriate incentive structure that compensates for the cost of participating (SMS or data plan costs) over and above the typical remuneration for time/trouble.

5.       If working with a panel provider, determine whether their mobile panel is purpose-built or simply regular panelists who opted in to mobile surveys.

6.       Use an enhanced invitation (see Ipsos paper for example): personalized, emphasizes that participation is free, states purpose of study (Total Design stuff). This resulted in higher response rate and more meaningful verbatims (though it had no effect in an email invitation to a web survey run in parallel, and a subsequent GlobalPark study contradicted the findings on rich verbatim).

7.       Think about how a mobile survey could integrate or converge with a social network so that you can combine a conversation or co-creativity exercise (network) with brief, point-in-time measures (mobile survey).
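The staggering advice in point 2 is easy to sketch. Everything in this snippet – batch size, gap, panelist IDs, start time – is made up for illustration:

```python
# Split a sample into batches and assign each batch a send time,
# instead of releasing every invitation at once. All values illustrative.
from datetime import datetime, timedelta

def stagger(sample, batch_size, start, gap_hours):
    """Yield (send_time, batch) pairs, one batch every gap_hours."""
    for i in range(0, len(sample), batch_size):
        batch_index = i // batch_size
        yield start + timedelta(hours=gap_hours * batch_index), sample[i:i + batch_size]

ids = [f"panelist_{n:03d}" for n in range(10)]
for send_at, batch in stagger(ids, batch_size=4,
                              start=datetime(2009, 3, 2, 9, 0), gap_hours=6):
    print(send_at.strftime("%H:%M"), len(batch))  # 09:00 4 / 15:00 4 / 21:00 2
```

Spreading the release out this way trades raw speed for responses that arrive across different times of day, which matters for the location-specific questions mobile is good at.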

Should not

1.       Use it gratuitously. There is no firm evidence that mobile respondents will be more engaged than in other modes/panel types, or that overall data quality or time-to-insight is improved. Examples of really successful implementations are few and far between.

2.       Expect to get older people or broadly representative consumer samples out of mobile panels. That being said, every mobile panel vendor said his or her panel is underutilized at this point so there is room for experimentation.

3.       Give people a mobile survey as one among many survey mode options. It is fundamentally different from other modes, and having too many options confuses people and lowers response rates.

4.       Field SMS surveys (as opposed to web-based mobile surveys) if you can avoid them, though the invitation trigger may be an SMS. They are easier and reach a broader base of respondents, but they trigger major concerns about cost and are tough to integrate on the back-end, since each respondent-response combination is a separate record.

5.       Tell people once that it won’t cost them to participate and expect they’ll remember. In the experiments and field results shown, cost to participate was a major concern no matter how often respondents were reminded that it was free. In the Ipsos study, even though the SMS was free, half of those dissatisfied with the experience focused on cost.

6.       Expect response rates from mobile panels to be significantly higher than from “normal” internet access panels. Cited response rates were mostly in the 10% to 15% range.
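The back-end integration headache in point 4 – each SMS reply landing as its own record – amounts to reshaping long-format data into one row per respondent. A minimal sketch, where the record layout (respondent_id, question, answer) is hypothetical and real SMS gateway exports will differ:

```python
# Collapse per-message SMS records into one row per respondent.
# The field names and sample data are hypothetical.
from collections import defaultdict

raw_records = [
    ("r001", "q1", "yes"),
    ("r002", "q1", "no"),
    ("r001", "q2", "5"),
    ("r002", "q2", "3"),
    ("r001", "q3", "weekly"),  # r002 never answered q3: a partial
]

def to_wide(records):
    """Group one-record-per-answer SMS data into one dict per respondent."""
    rows = defaultdict(dict)
    for respondent, question, answer in records:
        rows[respondent][question] = answer
    return dict(rows)

wide = to_wide(raw_records)
print(wide["r001"])  # {'q1': 'yes', 'q2': '5', 'q3': 'weekly'}
```

The reshaping itself is trivial; the real work is deciding what to do with respondents like r002 who drop out mid-survey, which is far more common when every reply costs them a message.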

Mobile Research Conference 09

Reg has kindly allowed me to act as a guest blogger. I'm in London at the (first?) Mobile Research Conference, put together by GlobalPark. About 75 people are in attendance, mostly suppliers and academics. Today was the first day and we had six sessions ranging from broad keynotes to case studies of specific research projects conducted on mobile platforms. My broad observations at this point are more about what isn't being said than what is:

  1. A lot of emphasis is placed on the advantages of mobile platforms as "personal" in a way that web surveys on computers are not. Not honestly sure I understand this. I understand that people have a different relationship with their phones than with their PCs, and that PCs are often shared, but that doesn't mean computers aren't personal, and in any case I'm not sure what practical advantage a "personal" device has for research. I certainly see some of the practical disadvantages – principally that if a research invitation is greeted as spam, it's likely to raise more ire on a phone than in email on a computer.
  2. Research using SMS and web surveys on a phone browser are the first modes I'm aware of that have cost implications to participants (other than the value of their time). Six questions posed by SMS can cost the participant upwards of $2. In countries with high rates of pay-as-you-go mobile plans, the cost can be quite high. This has clear implications for how we provide incentives, but I suspect it has other implications as well, which haven't been thought through well.
  3. There is a huge need for sample sources. There is no frame of mobile phone numbers and no real equivalent yet of internet access panels as a driver of mobile phone research growth. The panels that exist are subsets drawn from existing internet access panels, so it's coverage error compounded with coverage error. The panel providers who spoke typically saw 30% uptake among existing panelists invited to participate in a mobile panel, but surprisingly the common remark is that their mobile panels are underused.
  4. Questionnaire length limitations are and probably always will be a feature or limitation of this mode. We need to seek out more creative ways to use the mode beyond just asking questions. Sweetening responses with other forms of data (location services, photos) helps but it also turns the mode into a vehicle for discussions that are semi-qualitative in nature – very cool but not fitting very well into the model for efficient research that the industry relies on now.
  5. Every presentation focused inordinately on how quickly people respond. It reminds me of the early panel days when speed was touted as an end rather than a means to an end. I am interested in compressing time to results, but with the exception of certain polling and media research contexts, I don't think that getting responses in 2 hours rather than 2 days really addresses a client need.
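The participant-cost point in item 2 is worth putting rough numbers on. A back-of-envelope sketch, where the per-message price is an assumption (actual carrier rates vary widely, especially on pay-as-you-go plans):

```python
# Back-of-envelope cost borne by an SMS survey respondent.
# The $0.35 per-message price is illustrative, not a real carrier rate.

def respondent_cost(questions, price_per_sms, replies_per_question=1):
    """Cost to the respondent: one outbound reply per question, assuming
    (hypothetically) that inbound messages are free to receive."""
    return questions * replies_per_question * price_per_sms

# Six questions at an assumed $0.35 per outbound text.
print(f"${respondent_cost(6, 0.35):.2f}")  # $2.10
```

Even at modest per-message prices, a short questionnaire quickly reaches the "upwards of $2" figure cited at the conference – which is why incentive structures for this mode need to cover cost, not just time.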

Apologies for any typos. I'm trying to embrace the casual nature of the medium. Those interested in going a step further can consult the conference's tweets at #mrc09.