
Nearly three months after the September 7 election, the Australian Electoral Commission finally published completed results late last week, furnishing election wonks with such arcana as minor party preference flows and Labor-versus-Coalition two-party preferred results in the 11 seats where one or the other party failed to make the final count.

Among other things, the data at last makes it possible to comprehensively evaluate the performance of the opinion pollsters, a subject that generated more than its usual share of controversy during the campaign. What emerges is a polarised picture of strong performance in gauging national voting intention, but a clear pattern of Coalition bias in electorate-level and other localised polling.

First the good news: when final pre-election polls are compared with the official result, the performance of the various pollsters ranges from adequate to outstanding. This is illustrated in the table below, which shows two measures of performance with respect to both two-party preferred and the primary vote for Labor, Coalition and the Greens.

The first of these, the “error”, is the straightforward difference between the opinion poll and election results. The second seeks to express a given result’s “accuracy” as a percentage mark, in which 100% represents an exactly correct result, 50% indicates the degree of accuracy a poll should theoretically achieve at least half the time, and less than 5% means a result outside the margin of error.

For the sake of a level playing field, the margin of error is presumed to be the 2.6% that applies to a poll with 1400 respondents (though some of the samples were in fact a good deal larger than that).
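For the curious, both measures are easy to reproduce. Below is a minimal sketch in Python, assuming the accuracy score is the two-tailed normal probability of observing an error at least as large as the one recorded; that assumption is not spelled out above, but it reproduces all three anchor points (100% for an exact result, 50% at the median absolute error, 5% right at the margin of error).

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated vote share p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

def accuracy(error: float, moe: float) -> float:
    """Score a poll's error as a percentage mark.

    Assumes the score is the two-tailed normal probability of an error
    at least this large arising by chance: 100% for an exact result,
    50% at the median absolute error, 5% right at the margin of error.
    """
    se = moe / 1.96                             # standard error implied by the MoE
    z = abs(error) / se
    return 200 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

moe = margin_of_error(0.5, 1400)                # ~2.6%, the figure used here
print(f"Margin of error, n=1400: {moe:.1%}")
print(f"Accuracy of a 1-point error: {accuracy(0.01, moe):.0f}%")
```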

At first glance the outstanding result might seem to be Morgan’s bullseye on two-party preferred. However, two-party preferred projections based on the result of the previous election were, for the first time since the rise and fall of One Nation, off the mark. Had preferences behaved as they did in 2010, the Coalition would have scored an extra 1%, together with an extra handful of seats. Morgan’s exactitude on two-party preferred is thus offset by its relatively soft scores for primary vote accuracy, on which live-interview phone polling once again justified its reputation as the most reliable method.
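To see how such projections work: the previous-election method allocates each minor party’s primary vote to the major parties at the rates observed last time. The sketch below illustrates the idea; the flow shares are hypothetical placeholders rather than the actual 2010 figures, and the primaries are rounded 2013 results.

```python
def project_2pp(primaries: dict[str, float],
                flows_to_coalition: dict[str, float]) -> float:
    """Project the Coalition two-party preferred vote from primary votes,
    allocating each minor party's vote per a previous election's flows."""
    tpp = primaries["Coalition"]
    for party, share in primaries.items():
        if party not in ("Coalition", "Labor"):
            tpp += share * flows_to_coalition[party]
    return tpp

primaries = {"Coalition": 45.6, "Labor": 33.4, "Greens": 8.7, "Others": 12.3}
flows = {"Greens": 0.17, "Others": 0.55}   # hypothetical flow shares, not 2010 data
print(f"Projected Coalition 2PP: {project_2pp(primaries, flows):.1f}%")
```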

Line honours on this count go to Newspoll, which had the edge over its competitors because it overstated the Greens by the least amount. Galaxy equalled it on this score, but lost points by recording the Labor primary vote at 34% rather than 33% (the actual result being 33.4%), which might come down to nothing more than bad luck on rounding. Comparison between the two is particularly interesting in that Galaxy, unlike Newspoll, now goes to the effort of including mobile phones in its sample. Evidence has once again failed to emerge for the supposed deficiency of landline-only polling, at least in the Australian context.

The other landline-only phone pollster, Nielsen, had a very good result that was marred slightly by the recurring problem of an inflated Greens vote. Here every pollster other than Newspoll and Galaxy scored a “fail” mark of less than 5%, in large part reflecting the low theoretical margin of error when only a small share of the vote is involved.
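The point about small vote shares is a straightforward consequence of the margin-of-error formula, as the quick check below illustrates (again assuming a sample of 1400):

```python
import math

def margin_of_error(p: float, n: int = 1400, z: float = 1.96) -> float:
    """95% margin of error for an estimated vote share p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# The MoE shrinks as the vote share moves away from 50%, so the same
# absolute error is judged far more harshly on a Greens-sized vote.
print(f"MoE at p=50%: {margin_of_error(0.50):.1%}")   # ~2.6%
print(f"MoE at p=9%:  {margin_of_error(0.09):.1%}")   # ~1.5%
```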

The less-good news is that localised polls tended to foreshadow Labor disasters that in many cases failed to eventuate. This is illustrated by the average error and accuracy ratings for each pollster in the table below. Since most electorate polls were conducted some time before the election itself, changes in the parties’ fortunes through the campaign period are accounted for by adjusting each poll in line with national poll tracking results at the time.
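One simple way to implement that adjustment is to add to each seat poll whatever movement the national trend recorded between its fieldwork and election day. The following is a hypothetical sketch of such a procedure, not a description of the method actually used here; the trend readings are invented placeholders.

```python
# National Coalition two-party trend by date (invented placeholder figures).
national_trend = {
    "2013-08-12": 52.0,
    "2013-08-26": 52.8,
    "2013-09-06": 53.5,        # final pre-election reading
}

def adjust_poll(seat_2pp: float, fieldwork_date: str,
                election_eve: str = "2013-09-06") -> float:
    """Add to a seat poll the national swing recorded after its fieldwork."""
    swing_since = national_trend[election_eve] - national_trend[fieldwork_date]
    return seat_2pp + swing_since

# A seat poll taken mid-campaign gets nudged in line with the final trend.
print(adjust_poll(54.0, "2013-08-12"))   # 54.0 + 1.5 = 55.5
```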

As most of this polling was of the automated phone variety, the election result was seen by many as a blow to the method’s credibility. However, considering the results in their totality points to a more complicated picture.

Firstly, automated pollsters like ReachTEL performed much better in their national polling, and such errors as were recorded there ran in the opposite direction, with the Coalition coming in too low rather than too high.

Secondly, Galaxy bucked the trend by performing creditably with its automated electorate-level polling, despite a tendency to understate the “others” vote, and even that may just have been an artefact of the late-campaign rise of the Palmer United Party.

Thirdly, a bias to the Coalition was evident even in the normally reliable Newspoll, which was alone among the six pollsters in conducting live-interview rather than automated polling.

The biggest Coalition blowouts were recorded by newcomer Lonergan, but it should be noted that there are only three polls to go on here, and each apparently quirky result was matched by another pollster that targeted the seats in question at the same time (JWS Research in the case of Forde and Lindsay, Newspoll in the case of Griffith).

Amid a generally confusing picture, one point emerges clearly: the increasingly popular endeavour of targeting key electorates with automated phone polls, most of which are conducted in a single evening, is yet to prove its worth.