The results of the 2016 US Presidential election caught a huge number of pundits and pollsters by surprise. The major poll-based forecasts, many prediction models, the otherwise very precise prediction markets, and even the superforecaster crowd all got it wrong.

Our prediction survey, however, was spot on!

We called all the key swing states correctly (including Pennsylvania, which not a single pundit called correctly):

  • Pennsylvania: 48.2% for Trump vs. 46.7% for Clinton
    (actual result: 49.1 to 47.7)
  • Florida: 49.9% for Trump vs. 47.3% for Clinton
    (actual result: 48.8 to 47.6)
  • North Carolina: 51.0% for Trump vs. 43.5% for Clinton
    (actual result: 50.5 to 46.7)
  • Ohio: 46.1% for Trump vs. 43.9% for Clinton
    (actual result: 51.7 to 43.5)
  • Virginia: 46.2% for Trump vs. 48.5% for Clinton
    (actual result: 44.4 to 49.7)
  • Iowa: 56.5% for Trump vs. 39.5% for Clinton
    (actual result: 51.1 to 41.7)
  • Colorado: 44.8% for Trump vs. 51.0% for Clinton
    (actual result: 43.3 to 48.2)

On average across the swing states, our method was accurate to within a single percentage point.

Read the full prediction here.

Our method's only misses were Michigan, where it gave Clinton a 0.5-point lead, and New Hampshire, where it gave Trump a 1-point lead (for Wisconsin we didn't have enough survey respondents to make our own prediction, so we had to fall back on the polling average). Every other state, however close, we called correctly.

Our model even gave Clinton a higher chance of winning the popular vote than the Electoral College, which also proved correct.

Why did we get it so right when others got it so wrong?

Read more about it here.

The graph shows the calibration of our model: predictions (y-axis) plotted against the actual results (x-axis) for our method (blue dots) and the polling average (orange dots). A good prediction should have a slope close to 1, which is exactly what our method achieved (a slope of 1.1). The polling averages, on the other hand, had a flatter slope of 0.77, confirming a systematic underestimation of Trump's support even in states that Clinton won easily.
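A calibration slope of this kind is just a least-squares fit of predicted vote shares against actual results. The sketch below illustrates the idea using only the Trump vote shares from the seven swing states listed above; since this is a small subset of the states that went into the full calibration graph, the slope it produces will not match the 1.1 figure reported for the complete model.

```python
import numpy as np

# Actual and predicted Trump vote shares (in percent) for the seven
# swing states listed above: PA, FL, NC, OH, VA, IA, CO.
actual = np.array([49.1, 48.8, 50.5, 51.7, 44.4, 51.1, 43.3])
predicted = np.array([48.2, 49.9, 51.0, 46.1, 46.2, 56.5, 44.8])

# Least-squares fit: predicted = slope * actual + intercept.
# A perfectly calibrated forecaster would have slope 1 and intercept 0;
# a slope below 1 means the forecaster compresses real differences
# between states, i.e. underestimates strong candidates.
slope, intercept = np.polyfit(actual, predicted, 1)
print(f"calibration slope: {slope:.2f}")
```

The same fit applied to the polling averages across all states is what yields the flatter 0.77 slope discussed above.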

We also correctly predicted the outcomes of the 2016 Brexit referendum and the 2017 French presidential election.

  • Our best method estimated 51.3% FOR LEAVE
    (the actual outcome was 51.9%)