Now that this polling cycle is complete, we can update our Pollytrend to see whether the underlying trend in the cumulative polling results has changed.
Throwing in the new data for this fortnight and comparing this cycle’s Pollytrend with where our trend line was last fortnight gives us:
This cycle we’ve had Morgan Phone come in up 1 to the ALP, Essential Media up 2, Morgan Face to Face up 3 and Newspoll steady on the two party preferred. As a consequence, our polling trend line has continued its push upward for the Labor Party and the algorithm slightly recalibrated its position over the last month to smooth out this general trend line change.
Something you might be interested in is a quick squiz at the backend data I use to calibrate this. The first calculation turns the raw polling data into a series that is an all-pollster rolling average weighted by sample size. As a new poll comes in, the result of that new poll replaces that pollster's previous result in the poll average – this way trend changes can only occur on the basis of comparing like with like, or rather one pollster's results against its own previous results.
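To make that concrete, here's a minimal sketch of how a replace-the-previous-poll, sample-size-weighted average works. This is my illustration rather than the actual Pollytrend code, and the figures plugged in at the bottom are made up:

```python
# A sketch (not the real Pollytrend code) of the rolling all-pollster average:
# each new poll replaces that pollster's previous entry, and the average
# across pollsters is weighted by sample size.

latest = {}  # pollster name -> (ALP two party preferred %, sample size)

def update(pollster, alp_2pp, sample_size):
    """Replace this pollster's previous result with its newest poll."""
    latest[pollster] = (alp_2pp, sample_size)

def rolling_average():
    """Sample-size-weighted average of each pollster's most recent poll."""
    total_weight = sum(n for _, n in latest.values())
    return sum(p * n for p, n in latest.values()) / total_weight

# Hypothetical figures for illustration only
update("Newspoll", 57.0, 1700)
update("Morgan Face to Face", 60.5, 1650)
update("Essential Media", 58.0, 1100)
update("Morgan Phone", 58.5, 650)
print(round(rolling_average(), 1))
```

Because each pollster only ever overwrites its own previous entry, the average can only shift when pollsters move relative to their own earlier results.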
This stops us from comparing one pollster against another and knocks out any house effects that may be floating around (such as Morgan Face to Face polling generally having a slight lean to the ALP compared to, say, Newspoll). Essentially, the trend will only change when a majority of pollsters bring in poll results that show movement in the same direction. The reason the average is weighted by sample size is to reflect the higher level of certainty that comes with larger samples than with small ones – which helps us cut through the noise of sampling volatility.
So even though there might be house effects that influence each pollster's level (any given two party preferred result), weighting by sample size assumes that the house effect of each pollster is relatively consistent – that, for instance, a Morgan Face to Face poll leans to the Labor Party relative to a Newspoll by about the same amount today as it has at any other point since the last election. This way, a change between a given pollster's consecutive poll results carries more certainty (in that we can be more confident the true level of public opinion actually moved) when both polls have large sample sizes than when both have small ones.
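For a rough sense of how much more certainty a big sample buys, here's a back-of-the-envelope calculation of the standard error of a single poll's two party preferred estimate. The numbers are purely illustrative:

```python
# Rough illustration of why sample size matters: the standard error of a
# poll's two party preferred estimate shrinks as the sample grows.
def standard_error(p, n):
    """Approximate standard error (in percentage points) of a poll reporting
    p per cent from a simple random sample of n respondents."""
    return (p * (100 - p) / n) ** 0.5

print(round(standard_error(57.0, 1700), 2))  # ~1.2 points for a large sample
print(round(standard_error(57.0, 500), 2))   # ~2.2 points for a small one
```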
Once this rolling all-pollster poll average has been created, a quite aggressive locally weighted polynomial regression is run through the data – a LOESS regression. Here is what the actual all-pollster rolling average (blue circles) looks like with the regression line we use for our Pollytrend.
Any trends become quite prominent.
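For those who like to tinker, here's a sketch of that smoothing step – assuming Python with statsmodels, which is just my guess at tooling rather than what actually runs behind Pollytrend, and with a made-up bandwidth and made-up data:

```python
# A sketch of the LOESS smoothing step, not the real Pollytrend settings.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical rolling-average series: days since the last election and the
# all-pollster weighted average (ALP two party preferred) on each day.
days = np.array([0, 14, 28, 42, 56, 70, 84, 98, 112, 126])
avg_2pp = np.array([55.8, 56.1, 56.0, 56.6, 57.0, 57.3, 57.1, 57.6, 58.0, 58.2])

# frac controls how "aggressive" the local fit is: smaller values track the
# data more closely, larger values smooth harder. The 0.5 here is a guess.
trend = lowess(avg_2pp, days, frac=0.5)

for day, fitted in trend:
    print(f"day {day:5.0f}: trend {fitted:.1f}")
```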
We need to remember though that the important thing about Pollytrend is the movement, not its level. We don't actually know whether our Pollytrend line sits above or below the true underlying level of public political opinion – but we do know that the methodologies of the various pollsters remain fairly consistent, which means that even if the actual Pollytrend line is a point or two above or below the real underlying value of public opinion, it should always be that same point or two above or below, over any given period.
The change in direction will still be true – which is what Pollytrend is designed to measure.