Last week Charles Richardson wrote in Crikey about an Age report of a Galaxy poll on gay marriage.
Richardson said:
Three years ago when Newspoll surveyed attitudes to gay marriage, it found a slight majority opposed: 44% to 38%, with 18% undecided. But Galaxy now report a huge swing in favour, with supporters outnumbering opponents 57% to 37% . . . a movement like that in just three years is simply not credible. One of those polls is wrong.
If you have to choose, Newspoll is more likely to be right. Firstly, because we already know from last week’s debate that Galaxy engages in some dubious practices. Secondly, because this particular poll was commissioned by an organization, Get Up!, with a strong vested interest in the result.
We suggest these are two gratuitous comments which we could do without.
Certainly we criticised a Galaxy question recently. This does not mean that all Galaxy questions are suspect. And for an organization to publish a result it is pleased with does not make the result wrong.
But here the plot thickens. Both The Age and the Courier Mail carried the Galaxy poll story. The Age said it was conducted the previous weekend among 1100 people (no further information: who? where?), and did not state what question or questions were asked. It merely offered some results.
The Courier Mail said nothing about the sample, when the survey was conducted or the questions used, except to say that “The poll found support for equal rights for gay couples was widespread, with majorities of all demographics surveyed in favour.” Not good enough.
Neither the Galaxy web site nor the Get Up! site gives any more information about the poll, although from the Galaxy site one may infer that it was run on their omnibus survey, which is conducted ‘among a representative sample of 1100 Australians’ (no further detail).
Surely polls have now been published for enough years for the press and their reporters, and the polling companies themselves, to know that a certain minimum of information about a poll should always be provided to readers: when it was conducted, among whom, by what method, and the questions asked.
The likeliest explanation for the difference in the results is that the questions used in each poll were different. Unless we know what the questions were, drawing comparisons is mere speculation.
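To put the gap in perspective, here is a minimal back-of-the-envelope sketch, assuming simple random sampling and a Newspoll sample of roughly the same size as Galaxy’s stated 1100 (the Newspoll sample size was not reported, so that figure is our assumption). It shows that sampling error alone cannot account for a move from 38% to 57% support.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% sampling margin of error for a proportion p
    estimated from a simple random sample of size n (a best case:
    real polls carry extra design effects on top of this)."""
    return z * math.sqrt(p * (1 - p) / n)

# Reported support figures; the 1100 sample size is Galaxy's stated
# figure, and we assume a similar size for Newspoll for illustration.
polls = [("Newspoll", 0.38, 1100), ("Galaxy", 0.57, 1100)]

for name, support, n in polls:
    moe = margin_of_error(support, n)
    print(f"{name}: {support:.0%} support, sampling error about +/-{moe:.1%}")

# Each figure carries roughly a 3-point margin of error, so the
# 19-point difference between 38% and 57% is far too large to be
# sampling noise: either opinion really did move that much, or the
# two polls asked different questions.
```

Either way, the published reports give readers no means of telling which, which is precisely the problem.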
Charles Richardson made a further point with which we take issue: that attitudinal polls where there is no external check on the accuracy of the results are suspect, or as he says: “They could be getting it horribly wrong and no one would ever know”.
Certainly polls are not infallible, and bad or biased questions produce bad or biased results, but we can say this. In state and federal elections, where poll results can be compared with the actual vote, the main opinion polls published in the Australian media have a good record of accuracy – not perfect, but good.
From our experience, we would also assert that when carefully thought out attitudinal questions are asked by reputable and competent pollsters, the results are more likely than not to reflect the views of the population surveyed. We must be careful not to throw out the baby with the bathwater.