For the opinion polling industry, Denmark last fortnight became an apt new entry on the growing list of states in which something is rotten.

Following similar shocks in Britain, Poland and Israel, Danish pundits were confounded when the national election on June 18 singularly failed to deliver the result that polling had led them to anticipate.

The surprise related to the strong performance of the anti-immigration Danish People’s Party, which defied perceptions of the country as a beacon of liberal sentiment by polling 21% of the national vote.

On the other side of the North Sea, the British Polling Council is conducting an inquiry into the industry’s collective failure to call the clear Conservative Party victory at the May 7 general election, the causes of which remain in dispute.

A similar exercise is underway in Poland after presidential incumbent Bronislaw Komorowski, of the centre-right Civic Platform, went down to an unexpected defeat in May at the hands of the unheralded Andrzej Duda, candidate of the yet more conservative Law and Justice Party.

The ball on polling’s annus horribilis was set rolling on March 17 in the uniquely sensitive context of Israel, where Benjamin Netanyahu and Likud cruised to a comfortable re-election in defiance of polls that had shown the situation to be on a knife edge.

Amid ongoing speculation that Tony Abbott could seek to capitalise on a perceived change in the political breeze by springing an early election on Bill Shorten, this series of failures gives cause to question whether he might end up doing so on the basis of misplaced assumptions.

According to British polling maven Anthony Wells of YouGov, reviews of pollster failure have three alternatives to consider: “Did we interview the right people and they told us the truth, but then they changed their mind; or did people tell us the truth, but we interviewed the wrong people; or did we interview the right people and they lied to us?”

Pollsters may have a weakness for the first explanation, since it allows them to argue that their polls were accurate at the time they were conducted, which is all that can be asked of them.

This refrain was heard in Australia from Roy Morgan after it returned a false positive on a Labor victory at the 2001 federal election, and from Patterson Market Research when it appeared to dramatically overstate Labor support in the only poll published before the Australian Capital Territory election in 2012.

Those with a similar tale to tell in Britain at the moment argue that the Conservatives achieved spectacular success through a late-campaign drive to sow fears among voters in England about a minority Labour government being propped up by the Scottish National Party.


Since the collective error was largely a matter of failing to predict the balance of Conservative and Labour support in England, it’s entirely possible that there’s something in this. But it’s hard to believe that it could single-handedly explain a failure of such magnitude, particularly given that polls conducted on the eve of the election offered no sign of such an effect.

And needless to say, Scottish nationalism is of no use in explaining what went wrong in Denmark, Poland and Israel.

The notion of pollsters being deceived by their respondents typically relates to an unwillingness of those intending to vote right-of-centre to own up to the fact. The “shy Tory” effect is routinely invoked in discussion of polling in Britain, but it encounters the difficulty that online polls performed no better in May than those administered by interviewers.

Australian experience provides a notable lack of support for the theory, since election night surprises — think Paul Keating in 1993, Steve Bracks in 1999 and Annastacia Palaszczuk in February — have tended to be on the upside for Labor.

More familiar to Australian observers is Denmark’s recent experience of an anti-immigration party outperforming the polls, since this was what happened with Pauline Hanson’s One Nation during its heyday from 1998 to 2001.

However, this could equally be accounted for by the scenario with the most troubling implications for the polling industry, the one categorised by Anthony Wells as “interviewing the wrong people”.

On this theory, those who agree to participate in phone surveys, or join the panels from which online or SMS poll respondents are drawn, are not representative of the electorate as a whole. Tellingly, this would help explain any apparent decrease in reliability over time, due to changes in behaviour ushered in by the age of ubiquitous mobile communications.
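To make the mechanism concrete, here is a minimal sketch of how unequal response propensities skew a raw poll estimate. Every number in it is invented for illustration: the 21% support figure merely echoes the Danish result mentioned above, and the co-operation rates are hypothetical, not drawn from any actual survey.

```python
import random

random.seed(1)

# Hypothetical electorate of one million voters, 21% of whom back an
# anti-immigration party. Assume (for illustration only) that the
# party's supporters are less politically engaged and so co-operate
# with pollsters at 7%, against 14% for everyone else.
N = 1_000_000
SUPPORT, COOP_SUPPORTER, COOP_OTHER = 0.21, 0.07, 0.14

respondents = []
for _ in range(N):
    supporter = random.random() < SUPPORT
    cooperation = COOP_SUPPORTER if supporter else COOP_OTHER
    if random.random() < cooperation:
        respondents.append(supporter)

estimate = 100 * sum(respondents) / len(respondents)
print(f"True support: 21.0%  |  Poll estimate: {estimate:.1f}%")
# Prints an estimate of roughly 12%: the sample is enormous, but
# because the "wrong people" answer, support is badly understated.
```

No increase in sample size fixes this, because the error lies in who responds rather than in how many.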

Having grown accustomed to screening out many types of unwelcome contact, from telemarketing to spam, modern consumers have become immensely more difficult for pollsters to pin down — a fact powerfully illustrated by Pew Research in the United States, which found survey co-operation rates fell from 43% to 14% between 1997 and 2012.

This research also showed up the intuitively obvious finding that the diminished pool of respondents consists largely of the politically engaged, which would go a long way towards explaining bias against anti-immigration parties.

In addition to the three possible explanations for pollster failure just discussed, Dan Hodges of Britain’s Daily Telegraph provocatively offers a fourth: that “the pollsters lied to us”.

Such is Hodges’ uncharitable appraisal of the phenomenon of “herding”, in which pollsters who find themselves out of step with their peers revisit their methodology to produce more orthodox results.
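A toy simulation suggests why herding alarms statisticians. All figures below are hypothetical: ten pollsters are given the same small methodological error, and the herding rule of nudging out-of-step results halfway back to the pack is a stand-in for whatever adjustments real pollsters make.

```python
import random
import statistics

random.seed(7)

TRUE_SHARE = 0.38   # hypothetical true vote share
HOUSE_ERROR = 0.03  # a methodological error shared by every pollster
SAMPLE = 1000       # respondents per poll

def raw_poll():
    # Each poll is a simple random sample around the (shared, biased)
    # share that every house's methodology actually measures.
    hits = sum(random.random() < TRUE_SHARE + HOUSE_ERROR
               for _ in range(SAMPLE))
    return hits / SAMPLE

# Ten pollsters publishing exactly what they find.
independent = [raw_poll() for _ in range(10)]

# Ten pollsters who herd: a result out of step with the published
# pack is nudged halfway back towards the pack average.
published = []
for result in (raw_poll() for _ in range(10)):
    if published and abs(result - statistics.mean(published)) > 0.01:
        result = (result + statistics.mean(published)) / 2
    published.append(result)

for label, polls in (("independent", independent), ("herded", published)):
    print(f"{label:>11}: mean = {statistics.mean(polls):.3f}, "
          f"spread = {statistics.stdev(polls):.4f}")

# The herded polls agree with one another far more closely, which
# looks reassuring, yet both sets carry the same three-point error:
# herding hides disagreement without touching what everyone gets wrong.
```

The false consensus is the danger: the published numbers cluster tightly enough to inspire confidence while the shared error passes straight through to election night.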

These debates have not recently been in the spotlight in Australia, where pollsters have, in the main, performed to a high standard over the past decade at least.

But given the polling industry’s present state of flux, with long-established leading lights Newspoll and Nielsen both shutting up shop recently, there would seem to be a heightened risk that herding will cause deficiencies among one or two leading pollsters to infect the entire system.