Monday, July 15, 2013

When models fail us

The change of prime minister has had an unexpected impact on the polling data and on my attribution of house effects. This impact makes the polls particularly difficult to interpret at the moment.

The critical question is how much of a bounce Kevin Rudd's return brought to Labor's polling fortunes. To answer it, I will look at the average two-party preferred (TPP) Labor vote share for each polling house, comparing the polls taken in May and June (prior to 26 June) with those taken since.

House           Before        After         Bounce
Essential       45.3 (n=7)    48.0 (n=3)    2.7
Galaxy          45.5 (n=2)    49.0 (n=1)    3.5
Nielsen         44.5 (n=2)    50.0 (n=1)    5.5
ReachTEL        42.0 (n=1)    48.0 (n=1)    6.0
Newspoll        43.3 (n=4)    49.5 (n=2)    6.2
Morgan (multi)  44.5 (n=8)    51.7 (n=3)    7.2
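For readers who want to replicate the table, the bounce for each house is just the difference between the simple averages of its before and after polls. A minimal sketch for a single house (the poll readings below are made up for illustration; the real inputs would be that house's published TPP figures for May to July 2013):

```python
import numpy as np

# Hypothetical Labor TPP readings (per cent) for one polling house.
before = np.array([45.0, 46.0, 45.5, 44.5])  # polls before 26 June
after = np.array([48.5, 49.5])               # polls since 26 June

bounce = after.mean() - before.mean()
print(f"bounce = {bounce:.2f} points "
      f"(n={before.size} before, n={after.size} after)")
```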

While this is a little rough and ready (some might say arbitrary), it reveals substantial differences of view between the polling houses on the boost Kevin Rudd's return gave Labor.

Of note, Essential has gone from being among the most Labor-leaning polling houses to among the most Coalition-leaning.

The discontinuity model I had been using in recent weeks assumed that the polling biases under prime ministers Gillard and Rudd remained much the same. Clearly that is not the case. While I now have serious doubts about the utility of these charts, for reasons of historical continuity they follow:

[Charts: discontinuity model estimates]

If we limit our analysis to the data since the second ascension of Kevin Rudd, the story is a little different. This analysis suggests a 75 per cent chance that the Coalition has 50 per cent or more of the TPP vote share, and a 25 per cent chance that Labor is in front. 

[Charts: estimates from the post-26 June polls only]

At this stage I would urge some caution in interpreting the second Rudd-era polls. As more polling data becomes available, we will be able to better calibrate our models.
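For readers wondering how a probability such as "75 per cent chance the Coalition has 50 per cent or more" falls out of a model: if the Bayesian aggregation yields posterior draws of the Coalition's TPP share, the probability is simply the fraction of draws at or above 50. A minimal sketch with made-up draws (the real ones would come from the model's MCMC output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up posterior draws of the Coalition TPP (per cent); in practice
# these would come from the aggregation model's sampler.
draws = rng.normal(loc=50.4, scale=0.6, size=10_000)

p_coalition_ahead = float(np.mean(draws >= 50.0))
print(f"P(Coalition TPP >= 50) ~ {p_coalition_ahead:.2f}")
```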

2 comments:

  1. Yes, I have also noted some differences between the apparent biases of the different pollsters.

    Can you not extend your analysis above to calculate confidence intervals on the difference? I think this would be illuminating given the significantly different sample sizes.

    With the exception of Essential, I don't think (based on eyeballing the data) that there's any evidence the biases are inconsistent. The Galaxy number is perhaps a little low, but given the small number of samples it is not inconsistent. I'd be amazed if you can show, with n=2 before and n=1 after, that the jump factor is significantly different from any of the others. When you calculate the subtraction, the errors will add in quadrature, of course.

    The Essential result is troubling but almost certainly reflects the fact that Essential's results are based on panel sampling. Kevin Bonham discussed this in his recent post - the composition of the (relatively small) panel may differ significantly from the general population and so population level effects may not appear proportionately in the panel.

    If you really do use random sampling, as Newspoll and the others do, you shouldn't see any significant change in the bias at this level of accuracy.

    Not sure if you saw my second-to-last post, but I looked at the internal consistency of the different polls. My model re-weights the results to account for this, and in particular finds that the Morgan multi-mode poll should be downweighted significantly. If you want to do my difference calculation as suggested above for Morgan, you should calculate the reduced chi-squared (chisq_nu) about the weighted mean for the "before" sample and then multiply all of the standard errors by sqrt(chisq_nu). This will act as a first-order correction for the overdispersion. Let me know if this doesn't make sense...

    I'm sufficiently concerned about the Essential results that I may update the model to remove all Gillard era Essential polls to recalculate the bias.

    ReplyDelete
  2. Great comment.

    You will be in every week until the election is over.

    ReplyDelete
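The calculation the first commenter sketches (difference of weighted means, with the standard errors added in quadrature after each is inflated by sqrt of the reduced chi-squared as a first-order overdispersion correction) might look like the following. The poll values and standard errors are illustrative only; a standard error of about 1.5 points roughly matches the sampling error of a poll of 1,000 respondents.

```python
import numpy as np

def bounce_with_se(before, before_se, after, after_se):
    """Difference of weighted means with quadrature errors, after
    inflating every standard error by sqrt(reduced chi-squared) of
    the 'before' sample about its weighted mean (a first-order
    overdispersion correction, per the commenter's suggestion)."""
    w = 1.0 / before_se**2
    wmean = np.sum(w * before) / np.sum(w)
    dof = max(before.size - 1, 1)
    chisq_nu = np.sum(((before - wmean) / before_se) ** 2) / dof
    scale = np.sqrt(max(chisq_nu, 1.0))  # only ever inflate errors
    b_se, a_se = before_se * scale, after_se * scale
    amean = np.sum(after / a_se**2) / np.sum(1.0 / a_se**2)
    se_b = np.sqrt(1.0 / np.sum(1.0 / b_se**2))
    se_a = np.sqrt(1.0 / np.sum(1.0 / a_se**2))
    return amean - wmean, np.hypot(se_b, se_a)  # errors in quadrature

# Illustrative numbers only (a roughly Morgan-like jump):
before = np.array([44.0, 45.0, 44.5])
after = np.array([51.0, 52.5])
se = 1.5  # ~ sampling error for a poll of about 1,000 respondents
bounce, err = bounce_with_se(before, np.full(3, se), after, np.full(2, se))
print(f"bounce = {bounce:.2f} +/- {err:.2f}")
```

With these made-up numbers the jump dwarfs its standard error, but with n=2 before and n=1 after (the Galaxy and Nielsen cases) the quadrature error would be large enough that few of the house-to-house differences in the table would be individually significant, which is the commenter's point.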