The shock result in the 2020 election has left the political world desperate for answers and eager to blame. How could Democrats have come so close to losing the presidency against such an historically unpopular candidate? How could they have lost the Senate? How could they have almost lost the House?
Among the objects of wrath is polling and the polling industry – and not without reason. In 2020, polls performed poorly where it counted. While they might have been on the correct side of the binary (e.g. saying Biden won), they were way off in degree. But not every predictive tool is the same, and they did not all perform in the same way. Prediction markets, which achieved some degree of notoriety in 2016, ended up far closer to the mark than polling or prediction models. Consistently leading up to the election, betting markets gave Trump a better than 40% chance of winning the election. At a time when Nate Silver was saying Trump had an 11% shot at taking home the requisite electoral votes, PredictIt was giving him 43%. Markets saw something that others did not – namely, the inability of the polling industry to accurately measure the Trump demographic. Let’s look at some examples.
The RealClearPolitics polling average gave Biden a lead of 7.2% over Trump the day before the election. Some polls in that average, such as Quinnipiac’s, gave Biden a lead of 11%. As of Thursday evening, Biden has a lead over Trump of 2.7%, less than half what the polling average predicted. Prediction markets gave Biden only a 12% chance of leading by over 6%. Averaging out the numbers, bettors thought that Biden would end up with a 1-5% lead in the national popular vote, which is precisely what happened.
The RealClearPolitics polling average gave Biden a lead of 4.2% over Trump in Michigan the day before the election. Three polls included in that average (from CNBC, Emerson, and Mitchell Research) gave Biden a lead of 7%. The day before the election, PredictIt gave Trump a 33% chance of winning the state. For context, elections are generally decided by small margins; polls showing a candidate winning a state by 7% would, obviously, indicate an easy win. Bettors did not see it that way – they thought that Trump had a one-in-three chance of carrying the state. The eventual outcome was a Biden lead of 2.7% as of Thursday evening, much closer than polling indicated.
This is where things get disastrous for the polling industry. In the RCP average, Biden had an average lead over Trump in Florida of 0.9%. Some polls in that average had Biden up by as much as 5%. PredictIt bettors were on the other side of that analysis, giving Trump a 60% chance of winning Florida. The eventual outcome was in their favor, with Trump winning the state by 3.4%. Florida is particularly emblematic of the systemic errors in polling; PredictIt consistently had the state going in Trump’s favor.
Pollsters had Biden up by 6.7% on average in Wisconsin, including several polls that had Biden up by double digits. As of this writing, Biden leads in Wisconsin by a paltry 0.7%. That’s a 6% polling error, over twice the standard margin of error. Polling projected that Wisconsin was a lock for Biden, and an easy one at that. PredictIt gave Trump a 1-in-3 chance of winning, much closer than one would have guessed based purely on polling.
The RealClearPolitics average had Trump up by only 1% in Ohio. A Quinnipiac poll even had Biden up by 4%. Polls indicated the state would be a toss-up, but PredictIt gave Trump a 72% chance of winning. In the end, the markets were vindicated, as Trump won the state by 8.2%. That is a polling error of over 7%.
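The polling errors cited in the examples above all come from the same simple arithmetic: the final polling-average margin minus the reported vote margin. A minimal sketch of that calculation, using only the figures quoted in this article (the race labels and the `polling_error` helper are illustrative, not from any polling source):

```python
# Polling-average leads (Biden minus Trump, in percentage points) the day
# before the election, alongside the vote margins cited in this article.
# Negative numbers indicate a Trump lead.
races = {
    "National": {"poll": 7.2, "result": 2.7},
    "Michigan": {"poll": 4.2, "result": 2.7},
    "Florida": {"poll": 0.9, "result": -3.4},
    "Wisconsin": {"poll": 6.7, "result": 0.7},
    "Ohio": {"poll": -1.0, "result": -8.2},
}

def polling_error(poll, result):
    """Signed error: positive means polls overstated Biden's margin."""
    return round(poll - result, 1)

for state, r in races.items():
    print(f"{state}: polls off by {polling_error(r['poll'], r['result'])} points")
```

Every race comes out positive, i.e. the miss was in the same direction each time: the 7.2-point Ohio error and the 6-point Wisconsin error are not random noise but a consistent overstatement of Biden's margin, which is exactly the systematic bias the bettors appear to have priced in.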
We could keep going through these examples, including the disastrous performance of Democrats in the Senate and the House. Leaving that aside, one of the takeaways from this election ought to be that pollsters have routinely failed to capture the Trump coalition, but prediction markets haven’t. Maybe in the future we should pay less attention to the former and more attention to the latter.