The major political pollsters all got it badly wrong on May 18. Newspoll’s election-eve poll predicted a two-party-preferred (2PP) result of 51.5%-48.5% to Labor, almost exactly the reverse of the actual result, and the Coalition’s primary vote came in more than three points higher than polling suggested.

Galaxy Research, which like Newspoll is controlled by YouGov, had Labor winning 51%-49% and underestimated the Coalition vote by two points. Ipsos predicted 51%-49% in its last poll, and Essential reported 51.5%-48.5% in its final survey. Days earlier, veteran polling outfit Roy Morgan had weighed in as well, predicting 52%-48%.

To be fair, the reason the 2019 failures stand out so much is that Australian pollsters have historically been enviably accurate compared to their counterparts in the UK and the US — right up to 2016. But as ANU Vice-Chancellor and Nobel Prize-winning astrophysicist Professor Brian Schmidt, who knows a thing or two about numbers, pointed out the day after the election, the confluence of polling results in the 2019 election was improbable — highly improbable.
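
Just how improbable is easy to gauge with a back-of-the-envelope calculation. The sketch below is purely illustrative: it assumes five independent final polls, each with an effective simple random sample of about 1,500 voters, and treats the actual result (roughly 48.5% 2PP to Labor) as the truth. It then asks how often pure sampling error would put every one of those polls at 51% or better for Labor.

```python
# Back-of-the-envelope check on how improbable the 2019 poll clustering was.
# Assumptions (invented for illustration): five independent final polls,
# each an effective simple random sample of 1,500 voters, and a true Labor
# 2PP of 48.5% -- roughly the actual election result.
import math

TRUE_LABOR_2PP = 0.485
SAMPLE_SIZE = 1500
POLLS = 5

# Standard error of a single poll's 2PP estimate (normal approximation)
se = math.sqrt(TRUE_LABOR_2PP * (1 - TRUE_LABOR_2PP) / SAMPLE_SIZE)

# Probability that one poll lands at 51% or higher for Labor
z = (0.51 - TRUE_LABOR_2PP) / se
p_single = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability

print(f"standard error per poll:    {se:.1%}")        # about 1.3%
print(f"one poll at 51%+ for Labor: {p_single:.3f}")  # about 0.03
print(f"all {POLLS} polls at 51%+ for Labor: {p_single ** POLLS:.1e}")  # ~1e-8
```

On those assumptions, the chance of the observed clustering is on the order of one in a hundred million. Real polls are not simple random samples and their errors are not independent, but that is rather the point: something systematic, shared across pollsters, had to be at work.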

Psephological site Mark the Ballot presciently made a similar warning before the election. But as former Essential pollster Andrew Bunn pointed out to INQ, what made the result really weird is that the same wrong answer was produced by pollsters using very different methodologies. Newspoll, owned by British company YouGov via its ownership of Galaxy Research, uses a combination of robocalling and online polling. Essential uses online polling, while Ipsos, whose polls appeared in the Nine-Fairfax papers, used phone polling. But all got it wrong.

A review of the 2019 election polling failure is now being conducted for the Association of Market and Social Research Organisations (AMSRO) by an array of industry luminaries.

So, what happened?

A number of explanations have been advanced for what happened in May — some by the pollsters themselves, others by those prepared to be more sceptical about their methods:

  • A late swing to the government. This explanation is easily dismissed — any swing would have to have been so late it happened after the close of voting, since Galaxy ran an exit poll on election day showing victory to Labor by 52%-48%. But it’s worth noting that pre-poll votes heavily favoured the government, while the results of votes cast on May 18 itself were much closer to what the polls suggested. Galaxy’s exit poll may have been the most accurate survey of the campaign, but it could not capture the early swing to the government among the roughly one-third of voters who cast their ballots before May 18.
  • Shy Tories. As the name suggests, a British phenomenon, dating from the Thatcher and Major years, when the Conservatives racked up wins despite an apparent lack of support in the polls. The problem, as Bunn noted to INQ, is that for some reason these voters are shy in some elections but pretty unabashed in others — no one underestimated Tony Abbott’s support in his 2013 landslide win.
  • Herding. There’s been a lot of discussion about whether pollsters, worried about being out of step with other polls (especially the most influential, Newspoll), adjusted their weightings. Pollsters can choose from a huge range of weighting methods, adjustments that rescale the raw data so the sample better matches the demographic make-up of the actual population, and can thus use them to bring their results more into line with what they were seeing elsewhere (a simple weighting sketch follows this list). This might explain why the same problem occurred across polls with very different methodologies. William Bowe warned Crikey readers a week before the election to beware of this exact possibility, given the strange uniformity of the polls. But as Tasmanian psephologist Kevin Bonham points out, there’s no actual evidence that this happened.
  • Preferences and don’t knows. Pollsters have to allocate the preferences of respondents who indicate a voting intention for a minor party, and they use historical flows as a basis. But what if those flows no longer apply? The LNP collected a significantly greater proportion of One Nation preferences in 2019 than in 2016. Similarly, respondents who answer ‘don’t know’ are traditionally allocated in line with the overall sample. But what if, due to the nature of the campaign, the most disengaged and least informed voters broke persistently for one side rather than splitting evenly? A worked example of how these allocation assumptions move the headline 2PP figure also follows this list.
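
To make the weighting mechanism concrete, here is a minimal sketch of how re-weighting a raw sample by a single demographic variable (age) shifts a headline primary vote. The group shares and support levels are invented for illustration and are not any pollster’s actual figures.

```python
# Hypothetical illustration of demographic weighting on a single variable.
# The raw sample over-represents older voters; weighting each group by its
# population share rather than its sample share pulls the headline figure
# back toward what a representative sample would show. All numbers invented.

sample_share     = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}   # who answered the poll
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # the actual electorate
coalition_vote   = {"18-34": 0.30, "35-54": 0.38, "55+": 0.48}   # Coalition support by group

raw      = sum(sample_share[g] * coalition_vote[g] for g in sample_share)
weighted = sum(population_share[g] * coalition_vote[g] for g in population_share)

print(f"raw sample estimate:      {raw:.1%}")       # about 40.9%
print(f"demographically weighted: {weighted:.1%}")  # about 39.1%
```

The point is not that weighting is illegitimate (it plainly isn’t), but that the choice of which variables to weight on, and to what targets, gives a pollster enough latitude to move the headline number by a point or two, which is all herding would require.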
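
The preference point can likewise be illustrated with simple arithmetic. The sketch below uses invented round-number primary votes, not the real 2019 figures, and shows how the same primary votes produce a different 2PP headline once the assumed One Nation flow to the Coalition is lifted from a 2016-style 50% to something closer to the stronger flow recorded in 2019.

```python
# Rough two-party-preferred (2PP) calculation under different preference
# flow assumptions. Primary votes and flow percentages are illustrative
# round numbers, not actual 2019 results.

primaries = {"Coalition": 0.40, "Labor": 0.35, "Greens": 0.10,
             "One Nation": 0.05, "Other": 0.10}

def two_party_preferred(flows_to_coalition):
    """flows_to_coalition: share of each minor party's vote assumed to
    reach the Coalition ahead of Labor."""
    coalition = primaries["Coalition"]
    labor = primaries["Labor"]
    for party, share in flows_to_coalition.items():
        coalition += primaries[party] * share
        labor     += primaries[party] * (1 - share)
    return coalition, labor

# A 2016-style flow assumption versus a stronger 2019-style One Nation flow
old = two_party_preferred({"Greens": 0.18, "One Nation": 0.50, "Other": 0.50})
new = two_party_preferred({"Greens": 0.18, "One Nation": 0.65, "Other": 0.50})

print(f"2016-style flows: Coalition {old[0]:.1%} / Labor {old[1]:.1%}")  # about 49.3 / 50.7
print(f"2019-style flows: Coalition {new[0]:.1%} / Labor {new[1]:.1%}")  # roughly 50 / 50
```

On these invented numbers, the single change in the One Nation flow turns a 50.7%-49.3% Labor lead into a dead heat. That is nowhere near enough on its own to explain a miss of more than three points on the Coalition primary vote, but it is the kind of error that matters when everything else is leaning the same way.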

Others have approached the problem from a different perspective — particularly Bonham, Mark the Ballot, and Simon Jackman and Luke Mansillo of the University of Sydney. All have tried to mathematically adjust polling data over the last parliamentary term to reflect what we do know about what voters actually felt: we have the election results from 2016 and 2019, which tell us that the polls were very accurate in 2016 but went badly off-beam at some point before the 2019 election.
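
A crude toy version of that calibration exercise is sketched below: force the estimate of Labor’s 2PP to pass through the two known election results, treat the path between them as a simple straight line, and measure each pollster’s average deviation from that line as a rough ‘house effect’. The real analyses are considerably more sophisticated, and the polls in the sketch are invented; the only figures taken from reality are the two election results.

```python
# Toy version of anchoring poll data to known election results.
# The latent Labor 2PP is forced through the 2016 and 2019 results and
# interpolated linearly between them; each pollster's average deviation
# from that anchored path is treated as a crude "house effect".
# The poll readings below are invented for illustration.
from datetime import date

ELECTION_2016 = (date(2016, 7, 2), 0.4964)   # Labor 2PP at the 2016 election
ELECTION_2019 = (date(2019, 5, 18), 0.4847)  # Labor 2PP at the 2019 election

polls = [  # (pollster, fieldwork date, reported Labor 2PP) -- invented
    ("Pollster A", date(2018, 8, 20), 0.530),
    ("Pollster A", date(2019, 4, 28), 0.515),
    ("Pollster B", date(2018, 11, 5), 0.540),
    ("Pollster B", date(2019, 5, 15), 0.515),
]

def anchored_trend(d):
    """Latent Labor 2PP on date d, interpolated between the two elections."""
    (d0, v0), (d1, v1) = ELECTION_2016, ELECTION_2019
    frac = (d - d0).days / (d1 - d0).days
    return v0 + frac * (v1 - v0)

house_effects = {}
for pollster, d, reported in polls:
    house_effects.setdefault(pollster, []).append(reported - anchored_trend(d))

for pollster, errors in house_effects.items():
    print(f"{pollster}: average pro-Labor lean of {sum(errors) / len(errors):+.1%}")
```

How that pro-Labor lean is spread across the term, whether steady, worsening, or concentrated after a particular event, is precisely where the different analysts part ways.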

Bill and Chloe Shorten tuck into a ‘democracy sausage’ during the election campaign. Image: Twitter.

The question thus becomes how and when they went off track. The answer — which requires an excellent understanding of maths — depends on the assumptions you make. Jackman and Mansillo suggest the Coalition vote began recovering from late 2018, but that the recovery wasn’t picked up by the polls. Mark the Ballot’s models suggest either that the Coalition’s vote was understated by polling throughout the 2016-19 parliamentary term, with the problem worsening as the election neared, or that it enjoyed a late swing in the lead-up to the election that the pollsters missed. Bonham, who has an excellent analysis of the different approaches, advances the hypothesis that Scott Morrison’s arrival as Prime Minister in the wake of the ousting of Malcolm Turnbull was the catalyst for an improvement in the Coalition vote that the polls failed to fully capture.

While there’s no hard evidence for Bonham’s hypothesis, it would explain why Labor did well in last year’s Super Saturday byelections against Turnbull — especially in Queensland, where it comfortably retained Longman, and Tasmania, where it retained Braddon — only to lose both seats easily in May against a different Prime Minister. Remember that then-LNP president Gary Spence urged the Coalition to ditch Malcolm Turnbull last year because of his lack of appeal in the sunshine state.

Prime Minister Scott Morrison and his family on election night in May. Image: Twitter.

But there are so many variables with this scenario — and knowing when the polls went off track doesn’t explain why. As Jackman pointed out to INQ, the failure “could have been driven by any one of real changes in public opinion, biases in polls, or changes in poll biases — perhaps it was when YouGov bought Galaxy?”

Jackman — who is now working with Mansillo on further analysis of the election for the ANU — thinks herding is a serious issue in the polling community, but it’s almost impossible to verify unless the pollsters themselves reveal it. Bunn thinks that, despite the AMSRO inquiry currently underway, we’ll never find out exactly why all the polls were off — because the pollsters themselves might not know.

If so, that leaves those who rely heavily on polling in their professional lives with hard choices.