Yesterday we had an Essential poll with Labor on 50 per cent 2pp, the Coalition on 45 per cent and 5 per cent undecided (which I make 52.6 to 47.4 per cent with the undecideds distributed proportionally). This brings my aggregation of pollster 2pp estimates to 51.8 per cent in Labor's favour.
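For anyone who wants to check the arithmetic, here is a minimal sketch of that proportional redistribution (the function name and structure are illustrative only, not from Essential's methodology):

```python
def two_pp_excluding_undecided(labor: float, coalition: float) -> tuple[float, float]:
    """Redistribute the undecideds proportionally and return the 2pp split.

    labor, coalition: published 2pp shares (per cent) that do not sum
    to 100 because some respondents are undecided.
    """
    decided = labor + coalition
    return 100 * labor / decided, 100 * coalition / decided

# Essential: Labor 50, Coalition 45, undecided 5
print(two_pp_excluding_undecided(50, 45))  # (52.63..., 47.36...) -> 52.6 / 47.4
```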
Which brings me to thinking about polling error. In the 2019 Federal election, the polls suggested Labor would win 51.5 per cent of the 2pp vote. Labor's actual 2pp at that election was 48.5 per cent: a three percentage-point polling error. The scale of the 2019 error was larger than usual, but it is a reminder that although the polls are often pretty good at forecasting the final result, occasionally they get it badly wrong.
In respect of the individual seats (all seem to have odds this morning), we can see a small movement to Labor in aggregate.
Some of the more interesting seat movements are shown in the following charts. Note with Cowper below, there were no odds for the Independent yesterday, and at the moment I don't handle the missing data well (I will fix this in the next few days; one possible approach is sketched below). Dickson below is Peter Dutton's seat.
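Purely by way of illustration (this is not the code behind these charts), one simple way to handle a day with a missing price is to carry the last observed odds forward before converting to implied probabilities:

```python
import pandas as pd

# Hypothetical data: daily bookmaker odds, one column per candidate,
# with NaN where no price was published for a candidate that day
odds = pd.DataFrame(
    {"LNP": [2.10, 2.05, 2.00], "IND": [1.70, None, 1.75]},
    index=pd.to_datetime(["2022-04-01", "2022-04-02", "2022-04-03"]),
)

# Carry the last observed price forward over the gap, then convert
# odds to implied probabilities, normalising away the overround
filled = odds.ffill()
implied = (1 / filled).div((1 / filled).sum(axis=1), axis=0)
print(implied)
```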
Mark/Bryan, thanks for all the work on this analysis. Any thoughts on why your aggregate estimated 2PP for Labor is currently about a point higher than the Guardian/Mansillo/Jackman poll tracker (51.8 vs 50.7), given both use a similar Bayesian methodology, if I understand correctly? One difference seems to be the house effects, which in your analysis sum to zero across the major pollsters, while in the Guardian's model all pollsters seem to be assumed or estimated to systematically underestimate Coalition support. Have I got that right? MD.
Good question. The Bayesian methodology requires you to specify a model, which is an assumption about how the world operates, and then fit the data to that model. We are both using a similar model (based on the work of Simon Jackman).
I have not looked in detail at their underlying assumptions, but I can talk about mine. (1) I have not anchored my model to the previous election; rather, I have anchored it to the average of the pollsters with a significant number of polls in the marketplace. (2) I restart a pollster's series where there is a publicly announced or apparent methodology change, and I exclude the earlier series from anchoring. The house-effects part of the model that both of us use assumes no methodology changes. [Note: there have been some substantial unexplained methodology changes in the 2022-25 polling period.] (3) I ignore published sample sizes. They are not consistently reported between pollsters, and may reflect a good deal of statistical treatment by the pollsters. Back in the day, when polls were really polls, they were always noisier than the published sample size would suggest. These days, the polls are largely panel-driven mechanical turks, and the degree of noise is very low in some election cycles (e.g. 2019) and higher in others.
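By way of illustration only (a stylised sketch, not the production code behind either tracker), the core of a Jackman-style model is a latent daily voting intention following a random walk, with each poll observed as that latent value shifted by its pollster's house effect. The sum-to-zero constraint on house effects is what anchors the latent series to the pollster average. In PyMC it might look something like this, with all data and priors hypothetical:

```python
import numpy as np
import pymc as pm

# Illustrative inputs -- replace with real polls:
# y[i]     -- published Labor 2pp (per cent) for poll i
# day[i]   -- integer day index for poll i (0 .. n_days-1)
# house[i] -- integer pollster index for poll i (0 .. n_houses-1)
y = np.array([51.0, 52.0, 50.5, 53.0, 51.5])
day = np.array([0, 5, 10, 15, 20])
house = np.array([0, 1, 0, 2, 1])
n_days, n_houses = 21, 3

with pm.Model() as model:
    # Latent daily voting intention as a Gaussian random walk
    sigma_walk = pm.HalfNormal("sigma_walk", sigma=0.25)
    hidden = pm.GaussianRandomWalk(
        "hidden",
        sigma=sigma_walk,
        init_dist=pm.Normal.dist(50, 5),
        steps=n_days - 1,  # yields a vector of length n_days
    )

    # House effects: free parameters for all but one pollster,
    # with the last effect set so they sum to zero (the anchor)
    he_free = pm.Normal("he_free", 0, 2, shape=n_houses - 1)
    house_effect = pm.Deterministic(
        "house_effect",
        pm.math.concatenate([he_free, -he_free.sum(keepdims=True)]),
    )

    # Observation model: each poll is the latent intention on its day,
    # shifted by its pollster's house effect, plus noise
    sigma_obs = pm.HalfNormal("sigma_obs", sigma=2)
    pm.Normal("obs", mu=hidden[day] + house_effect[house],
              sigma=sigma_obs, observed=y)

    trace = pm.sample()
```

Note the design choice point (3) above alludes to: the observation noise here is a single estimated sigma_obs, not a noise level derived from published sample sizes.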
Thank you!