Charting a course through the COVID-19 Pandemic

Two interesting pieces of research out this morning about the impact of the COVID-19 pandemic on research results.

The first comes from Zappi and an online test of 26 concepts and advertisements across six consumer categories (personal care, food and beverage, home hygiene, telco, QSR, OTC) in five markets (US, UK, China, Italy and Mexico). All 26 had been tested prior to March 1 and were retested March 17-18. Zappi’s analysis found few meaningful differences across 78 comparisons. My gut reaction: “How can this be?”

There also is a report just released from the Pew Research Center here in the US. It was fielded over the week of March 10 to March 16 and asked Americans about the perceived threat of COVID-19 to the US economy, the overall health of the US population, daily life in their communities, their personal financial situation, and their personal health. While 70% reported seeing the pandemic as a major threat to the US economy, only about a third saw it as a major threat to their personal financial situation and just 27% as a major threat to their personal health. Another 23% saw COVID-19 as presenting no threat at all to their personal health. The obvious message would seem to be that only a small minority of the US population saw COVID-19 as a major threat to them personally. How can this be?

It might be partially explained by the changes in responses over the survey period, during which four of the five measures rose significantly. For example, 29% of respondents who completed the Pew survey in the first two days (March 10-11) felt that the virus posed a major threat to their personal finances. By the last two days (March 15-16) that measure had risen 11 points to 40%. On the other hand, respondents’ concerns about their personal health remained fairly consistent, although the alarm bells didn’t start ringing loudly here in the US until the week of March 16.

These are two different studies with different samples covering different time periods, and there may be multiple ways to reconcile them. I’m not going to try to do that. However, I would note that it is critically important to have multiple measures and perspectives if we are to really know what is happening. I also want to reinforce the message that it’s more important than ever to continue to do research even as the pandemic continues to unfold and, hopefully, subsides. The more points of measurement you can bring to bear, the better. Businesses, governments, non-profits: any organization that relies on data to make informed decisions needs to understand how the attitudes and behaviors of the people it serves are evolving as we work our way through this crisis. If ever knowledge were power, this is that time.

For details on the Zappi work look here. For the Pew study look here.


The cost of privacy

My email this morning included a message from Hertz describing their new fleet of BMWs. I don’t rent from Hertz anymore, and the emails they continue to send are mostly reminders that my driver’s license has expired, which happened quite a while ago. But why the BMW pitch? Perhaps because in 2009 I treated myself to a BMW 3 Series rental in Ireland, which was great fun on the winding roads of the Dingle Peninsula and the Ring of Kerry? Maybe they have a long memory? Or maybe the fact that I currently drive a BMW somehow found its way into my profile?

In any case, it reminded me of Pew’s recently released study, “Americans and Privacy.” A few relevant findings:

  • 72% of US adults believe that all or most of what they do online is tracked by companies.
  • 79% are either very or somewhat concerned about how their personal data is being used by those companies.
  • 59% say they understand very little or nothing about “what companies do with their data” and only 18% say they have a great deal or some control over their data.
  • 28% say they benefit a great deal or somewhat from the data companies collect on them and 81% say “the potential risks outweigh the potential benefits.”
  • 75% say there should be more “regulation of what companies can do with their customers’ personal information.”

I could go on, but I think these few examples make the point: the US public is beyond fed up with daily and routine violations of their privacy. They are especially concerned about the amount of personal information about them collected by social media companies (85%), advertisers (84%), and the companies they buy things from (80%).

The old saying, “On the Internet, nobody knows you’re a dog,” no longer applies.

The sad reality is that most, although not all, of this data collection and reuse is legal, at least in the US, and that’s not likely to change anytime soon. One frequently cited reason for not taking privacy and personal data protection more seriously is that it just costs too much.

Earlier this year the Information Technology and Innovation Foundation (ITIF) released a study, “The Costs of an Unnecessarily Stringent Federal Data Privacy Law.” By way of definition, “unnecessarily stringent” means something similar to the GDPR or CCPA. The report estimates that such a privacy regime would cost the US economy about $122 billion (sometimes they say “billion” but the tables say “million”) per year, or $483 per US adult. (By way of comparison, that’s more than 50% of what we spend on electricity every year.) So what are those costs?

Around 10% would go to Data Protection Officers and upgraded data infrastructure, two major areas of complaint about the GDPR. But the lion’s share, 85% of the total, would go to two areas: Reduced Access to Data and Lower Ad Effectiveness.
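
A quick back-of-the-envelope check makes those headline figures easier to hold onto; note that the population count below is implied by the report’s numbers rather than quoted from it:

```python
# Sanity check of the ITIF headline figures cited above.
total_cost = 122e9  # $122 billion per year (the headline figure)
per_adult = 483     # $483 per US adult

# Population implied by dividing one headline number by the other.
implied_adults = total_cost / per_adult
print(f"Implied US adults: {implied_adults / 1e6:.0f} million")
# ~253 million, roughly the US adult population, which supports reading
# the headline figure as billions rather than the tables' "millions."

# The report's breakdown of where the money would go.
print(f"DPOs and data infrastructure (~10%): ${0.10 * total_cost / 1e9:.1f}B")
print(f"Data access and ad effectiveness (85%): ${0.85 * total_cost / 1e9:.1f}B")
```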

In the case of Reduced Access to Data, privacy requirements such as express consent, data minimization, and purpose specification will reduce data sharing. In one of my favorite sentences in the report, the authors write, “Unfortunately, opt-in requirements frame consumer choices in a way that leads to suboptimal data sharing because most users select the default option of not giving consent—for a number of irrational reasons.” So best we stop asking.

As for Lower Ad Effectiveness, the report tells us that “Targeted advertising is beneficial to consumers and businesses.” Such advertising allows businesses to be more efficient and increase sales. “Consumers benefit by gaining more utility from relevant ads.” More utility?

Sadly, I hear similar arguments from within market research, in the form of complaints about the cost of compliance as GDPR goes global.

One of my favorite lines in the old CASRO Code of Conduct is this one: “Since research participants are the lifeblood of the research industry, it is essential that those approached to participate be treated with respect and sensitivity.” I worry that we’ve lost that sense of respect for those whose data we rely on, whether we are collecting from them directly or harvesting their data from the cloud. Online panels have led us to think of respondents as a commodity, and our increasing reliance on big data sources has caused us to stop thinking of them as people at all. In the privacy debate they are an abstraction in a one-sided cost-benefit exercise.

There are recent surveys that show those people who are our lifeblood don’t think very highly of us these days. They don’t trust us with their data any more than they trust social media networks or advertisers and, whether rational or irrational, they are less and less inclined to cooperate with research requests. This is not a good thing, to say the least. It’s important that we figure out sooner rather than later whose side we are on.


Is Insights a Profession?


Yesterday, like 100 million or so other Americans, I tuned in to the Super Bowl. I did not tune in to see whether the Flying Elvis would beat a team that was only there because of one of the most glaring officiating errors in the history of American football. Rather, I tuned in to hear Romostradamus, the Twitter-endowed name for Tony Romo, the former quarterback turned color commentator who has become a phenomenon based on his ability to analyze a football game, explain strategy to the audience, and predict what play will be run with uncanny accuracy.

Writing in the New Yorker, Zach Helfand describes how Romo learned to do it: “In the course of his career, he watched hundreds of plays from the bench, lived through thousands more on the field, and then relived them many times over in the film room.” Frank Bruni, writing in the Times, echoes this theme of relentless study and preparation that continues even to this day. In short, Romo approaches his job as a professional.

The Oxford Dictionary defines a profession as “a paid occupation, especially one that involves prolonged training and a formal qualification.” How many of us who fancy ourselves as “insight professionals” meet that test?

As I have said in other contexts, market researchers seem to take an almost perverse pride in describing how they “fell into” MR with little or no relevant education or training. Over the last decade in particular we have celebrated the inclusion of specialists from other fields—computer science, anthropology, social psychology and journalism, to name a few. As one well-known practitioner (whose name I withhold to protect his innocence) once observed, “We have allowed this industry to be taken over by venture capitalists and technology geeks.”

All this would be fine were there a concerted effort to teach these new entrants the foundational principles that underpin good research and actionable insights. What we have instead is some learning by doing and some basic orientation in how a single company does what it does.

In December I ended a four-year stint as Executive Director of the Market Research Institute International (MRII), a non-profit that develops online courses in market research in partnership with the University of Georgia. When I first joined, I was astonished by the widespread indifference to education and training among individual practitioners and their employers, on both the client and supplier sides. And it’s not just me. This unfortunate reality is documented in a recent NewMR survey in which 39% of respondents reported receiving less than six hours of training per year. This is an industry that fancies itself as undergoing rapid and dramatic change, yet it is doing precious little to address that change in a meaningful way.

Tony Romo can do what he does because of relentless and ongoing study that continues to this day. That might be more than those of us who aspire to be insight professionals can or want to commit to, but continuing to do what we have been doing is not going to produce a future where MR can play the essential role it seeks to play.


Finding the soul of research

I stole the title of this post from Simon Chadwick’s editorial in the November/December issue of Research World. It reminded me that I, like many young people, began my career as something of an idealist. My first two jobs were with nonprofits, and then in 1984 I joined NORC at the University of Chicago, whose tagline was and still is, “Social Science Research in the Public Interest.” I spent 11 years of my life there and learned an enormous amount before moving to what I still self-mockingly refer to as “the dark side” in 1995.

I was reminded of this while reading Simon's editorial and was especially struck by this sentence:

“Polls are where research loses its soul; commercial MR is where it forgets it has one; and social research is where we find it.”

Well said, Simon.


Big Data: Part 3

This post is the third and last, at least for now, in my series about MR’s struggles with big data. Its theme is simple: big data is hard.

For starters, the quality of the data is not what we are accustomed to. More often than not the data were collected for some other purpose, and questions about the accuracy of individual items, their overall completeness, their consistency over time, their documentation, and even their meaning pose serious challenges to reuse. Readers familiar with the Total Survey Error (TSE) model will recognize that big data is vulnerable to all of the same deficiencies as surveys—gaps in coverage, missing data, poor measurement, etc. The key difference is that survey researchers, at least in theory, design and control the data-making process in a way that users of big data do not. For users of big data the first step is data munging, often a long and very difficult process with uncertain results.
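
To make the munging point concrete, here is a minimal sketch of a first-pass audit in Python, using a tiny invented stand-in for what would in practice be a large repurposed dataset:

```python
import pandas as pd

# Tiny invented stand-in for a "found" dataset collected for some
# other purpose; a real file would have millions of rows.
df = pd.DataFrame({
    "event_date": pd.to_datetime(["2015-01-03", "2015-01-17", "2015-02-02", None]),
    "amount": [19.99, -5.00, 250000.0, 24.50],  # note the negative and the outlier
    "store_id": ["S01", "S01", None, "S02"],
})

# Completeness: what share of each column is missing?
print(df.isna().mean())

# Consistency over time: do the monthly distributions drift?
dated = df.dropna(subset=["event_date"])
print(dated.groupby(dated["event_date"].dt.to_period("M"))["amount"].describe())

# Accuracy of individual items: flag impossible values for human review.
suspect = df[(df["amount"] < 0) | (df["amount"] > 10_000)]
print(f"{len(suspect)} of {len(df)} records flagged as suspect")
```

None of this answers the hard questions (what does a negative amount mean in the source system?); it only surfaces them.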

Then there is the technology. We all have heard about the transition from a world where data are scarce and expensive to one where they are plentiful and cheap, but the reality is that taking big data seriously requires a significant investment in people and technology. There is more to big data than hiring a data scientist. The AAPOR Report on Big Data has a nice summary of the skills and technology needed to do big data. While the report does not put a price tag on the investment, it likely is well beyond what all but the very largest market research companies can afford.

Much of the value of big data lies in the potential to merge multiple data sets together (e.g., customer transaction data with social media data or Internet of Things data), but that, too, can be an expensive and difficult process. At the heart of this merging process are bits of computer code called ETLs (extract, transform, load) that specify what data are extracted from the source databases, how they are edited and transformed for consistency, and how they are merged into the output database, typically some type of data warehouse. Take a moment and consider the difficulty of specifying all of those rules.
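
For readers who have never seen one, here is a toy ETL sketch in Python; the sources, field names, and coding schemes are all invented for illustration, and a production version would run against real databases:

```python
import pandas as pd

def extract():
    # Stand-ins for pulls from two source systems.
    crm = pd.DataFrame({"cust_no": [101, 102], "gender": ["M", "F"]})
    web = pd.DataFrame({"customer": ["101", "103"], "sex": [1, 2]})  # 1=male, 2=female
    return crm, web

def transform(crm, web):
    # The hard part: rules that reconcile keys and codings across sources.
    crm = crm.assign(customer_id=crm["cust_no"].astype(str),
                     gender=crm["gender"].map({"M": "male", "F": "female"}))
    web = web.assign(customer_id=web["customer"],
                     gender=web["sex"].map({1: "male", 2: "female"}))
    cols = ["customer_id", "gender"]
    # Rule for customers present in both sources: trust the CRM record.
    return pd.concat([crm[cols], web[cols]]).drop_duplicates("customer_id")

def load(frame):
    # In practice the destination is a data warehouse table.
    print(frame.to_string(index=False))

load(transform(*extract()))
```

Even this trivial example needed a handful of reconciliation rules; multiply that by hundreds of variables and dozens of sources and the difficulty becomes clear.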

If you have ever written editing specs for a survey dataset, then you have some inkling of the difficulty. Now consider that in a data merge from multiple sources you can have the same variable with different coding; the same variable name used to measure different things; differing rules for determining when an item is legitimately missing and when it is not; detailed rules for matching a record from one data source with a record from another; different entities (customers, products, stores, GPS coordinates, tweets) that need to be resolved; and so on. This is difficult, tedious, unglamorous, and error-prone work. Get it wrong, and you have a mess.
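
Two of those pitfalls are easy to illustrate, again with invented data: keys that must be normalized before records will match, and a legitimately missing value (a refusal) that must be kept distinct from a value missing because the match failed:

```python
import pandas as pd

surveys = pd.DataFrame({"email": ["A@X.COM ", "b@y.com"],
                        "income": [55000, None]})  # None here is a refusal
crm = pd.DataFrame({"email": ["a@x.com", "c@z.com"],
                    "segment": ["gold", "silver"]})

# Matching rule: normalize keys before joining, or matches silently fail.
surveys["key"] = surveys["email"].str.strip().str.lower()
crm["key"] = crm["email"].str.strip().str.lower()

merged = surveys.merge(crm[["key", "segment"]], on="key", how="left",
                       validate="one_to_one", indicator=True)

# Missing-value rule: a blank income is a legitimate refusal, while a blank
# segment means the person was never in the CRM. They are not the same thing.
print(merged[["key", "income", "segment", "_merge"]])
```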

To sum up this and the previous two posts, I worry that big data is a much bigger deal than most of us realize. We may fancy ourselves as pioneering in this space but it’s not clear to me that we understand just how hard this is going to be. For all of the talk about paradigm shifts and disruption, this is the real deal if for no other reason than it is the right methodology (if the word applies here) to support the other big disruption, a shift away from focusing on attitudes and opinions to focusing on behavior.

Back in 2013 I put up a post about a big data event put on by ESOMAR in which John Deighton from Harvard gave a talk that was the most compelling description I had heard of the threat of big data to traditional MR. The reaction in the room struck me as more whistling past the graveyard than taking the challenge for what it is. Two years later things don’t feel that much different. We are still in a state of denial. We had better get cracking.