Now there is a question that we ought to ask more often. At last week's MRMW Conference we heard about a whole lot of different methods—some mobile and some online—for which their evangelists made great claims. Those claims generally focused on the great insights the methods can deliver about some generic, ill-defined group, typically "people" or "consumers." It's not always clear which people or which consumers we're learning about. Does it mean everybody or just somebody? If somebody, exactly who?
So around the end of the second day, after still another presentation describing an approach to online research that delivers incredibly accurate results for an equally incredibly low price, without all that messy sampling and weighting, I asked the question: Why does it work? After a pause I got my answer: "Because we are very careful."
Say what you will about probability sampling, but you have to admit that it has a theory underneath it, some scientific underpinnings, and lots of empirical research to make a strong argument for why it works. Violate its key assumptions through incomplete coverage or high nonresponse and it may not work so well. The New MR crowd has those arguments more or less down pat (although a fair number of them seem not to have gotten the memo about sampling mobile phones now being SOP).
But if we are doomed to non-probability sampling methods, then let's at least hold them to the same standard. Let's ask why they work. Let's ask how we are to know that the methods used to collect and analyze the data lead to a reasonably accurate measure of the attitudes and behaviors of the people the research claims to represent.
Let's start having those conversations. "It just works" should not be enough. That crosses the line from science to faith, and we all know what Mark Twain said about faith.