Once again the second biggest story coming out of a US presidential election is what one commentator described as "the absolutely massive failure of political polling." Really?
Consider this. Nate Silver, relying mostly on surveys, predicted that Joe Biden would win 30 states and that the tipping-point state would be Pennsylvania. He forecast the winner in all but two of those 30: Florida and North Carolina. That’s a 93% hit rate. His projection of the final vote was off by 2.8 points in Florida and just 1.3 points in North Carolina. Taken as a whole and by pretty much any objective measure, that’s not bad. And whether we like it or not, the public, along with those of us who work in the insights field, will have to accept it as the best we can do.
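The arithmetic behind that hit rate is simple to verify. A minimal sketch, using only the figures cited in the paragraph above:

```python
# Back-of-the-envelope check of the cited hit rate:
# Silver forecast 30 states for Biden and missed two (Florida, North Carolina).
predicted_biden_states = 30
misses = 2

hit_rate = (predicted_biden_states - misses) / predicted_biden_states
print(f"{hit_rate:.0%}")  # 28/30 rounds to 93%
```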
Surveys have become a blunt instrument from which we expect, unreasonably in my view, to get precise measurements. Perhaps that expectation was more reasonable when we could generate probability samples from frames with near-complete coverage and achieve very high response rates. Even then, our reliance on sampling meant there would be some error in our estimates, and that drawing repeated samples from that same frame, even at similar response rates, would still not deliver the exact same estimates every time.
But those days are gone. The key pillars of scientific survey research – a probability sample drawn from a frame with high coverage of the target population and a high response rate – have all but disappeared, and with them went our capacity to deliver results that come close to meeting the expectations of the media and the public at large. That’s not the fault of political pollsters but of decades of social and technology-driven change that have dramatically altered what is possible. Although, to be fair, there are those who oversell what they do and why it works better than the rest.
So instead we have a hodgepodge of methods that run the gamut from science to snake oil. Fortunately, most still work reasonably well. They work in part because political pollsters enjoy an advantage that the rest of us in research don’t: they eventually learn the right answer, and therefore have a leg up on figuring out what they got wrong and how to fix it for the next round.
Most of what I see written about these massive failures is by the same people who, lacking any grasp of the basics of electoral polling, nonetheless obsess over the polls before the election and then need someone to blame when they see the results. Or perhaps they long for the good old days when old white guys acting as savants would sit around predicting outcomes based on more reliable methods, such as counting yard signs. That aside, the big question may well be not how surveys can improve but how the polling industry can thrive in a world where simply forecasting winners and losers is not enough, where we must also say precisely by how much. This is a game of expectations, and right now political pollsters are losing it. And just maybe, that's their fault.