
Monday, March 14, 2016

Ipsos 53-47 (and a rant on polling methodology)

The Fairfax-Ipsos poll for March has been released. It estimated the national two-party preferred vote share for the Coalition at 53 per cent (a one percentage point improvement on the February poll).


Adding this poll to my aggregation sees no change. The Coalition's national two-party preferred vote share remains estimated at 52.3 per cent.
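For readers curious about the mechanics, here is a toy illustration of one way polls can be combined: a precision-weighted average, in which each poll counts in proportion to the inverse of its sampling variance. The sample sizes below are invented, and this is not the model behind the chart above; it simply sketches the general idea.

```python
# Toy poll aggregation: a precision-weighted average of recent polls.
# The TPP figures echo the March 2016 published polls, but the sample
# sizes are invented; this is not the model behind the chart above.

polls = [
    # (pollster, Coalition two-party-preferred per cent, sample size)
    ("Ipsos",      53.0, 1400),
    ("Newspoll-2", 50.0, 1700),
    ("Essential",  50.0, 1800),
    ("Morgan",     53.0, 3000),
]

def variance(tpp, n):
    """Sampling variance of a TPP estimate, in percentage points squared."""
    p = tpp / 100.0
    return p * (1 - p) / n * 100.0 ** 2

# Weight each poll by the inverse of its sampling variance (its precision).
weights = [1.0 / variance(tpp, n) for _, tpp, n in polls]
estimate = sum(w * tpp for w, (_, tpp, _) in zip(weights, polls)) / sum(weights)

print(f"precision-weighted estimate: {estimate:.1f} per cent Coalition TPP")
```

A real aggregation also needs to deal with house effects (systematic differences between pollsters) and with change in voting intention over time; the sketch above ignores both.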



I saw quite a lot of Twitter outrage over this poll as I went to bed last night. The outraged were asking: how could Morgan and Ipsos have the Coalition at 53 per cent, when Newspoll-2 and Essential have it at 50 per cent? The outraged were disinclined to believe the latest Ipsos poll.

I must admit my biases run the other way. Newspoll-2 and Essential do not appear to behave in a manner that is entirely consistent with statistical theory (with the caveat that Newspoll-2 is relatively new and the number of observations in this series is small). Both appear a little under-dispersed. There is less variation in the national population estimates from poll to poll than you would expect for the reported sample sizes, assuming the sampling frame is based on a random selection from the entire population. I am disquieted by this under-dispersion. It suggests there may be issues with the sampling methodology.
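To put a number on that intuition: under simple random sampling, a poll of about 1,000 respondents with the vote split near 50-50 has a standard error of roughly 1.6 percentage points, so successive estimates should wander noticeably even when nothing real has changed. The sketch below, using an invented series of poll results and an assumed sample size, compares the observed poll-to-poll spread with the spread that theory predicts.

```python
import math
import statistics

# Hypothetical series of two-party-preferred estimates (per cent) from one
# pollster. These numbers are invented for illustration only.
polls = [50.0, 50.0, 51.0, 50.0, 50.0, 51.0, 50.0, 50.0]
n = 1000   # assumed sample size per poll
p = 0.5    # assumed true vote share (the worst case for sampling variance)

# Standard error of a single poll under simple random sampling, in points.
theoretical_se = math.sqrt(p * (1 - p) / n) * 100   # about 1.58 for n = 1000

# Observed poll-to-poll spread in the series.
observed_sd = statistics.stdev(polls)

# Variance ratio: values well below 1 suggest the series is under-dispersed,
# i.e. less poll-to-poll variation than random sampling alone would produce.
variance_ratio = (observed_sd / theoretical_se) ** 2

print(f"theoretical SE : {theoretical_se:.2f} points")
print(f"observed SD    : {observed_sd:.2f} points")
print(f"variance ratio : {variance_ratio:.2f}")
```

Because genuine movement in voting intention adds to sampling noise, the observed spread should normally be at least as large as the theoretical standard error; a variance ratio well below one is the tell-tale sign of under-dispersion.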

The magic of making statistical inferences about the entire population from a small sample only works when that sample has been randomly selected from the entire population. The maxim I apply is: when the poll results are too consistent from one poll to the next (given the sample sizes), they may be too good to be true.

No polling house achieves the gold standard of a randomly selected sample.

The use of telephone polling axiomatically excludes those voters without a phone. It may also exclude those whose phone numbers are unlisted. It is likely (for example) that voters in nursing homes will be under-represented. Excluding mobile phones will reduce the representation of those aged under 35 years (especially those living in share houses without a land-line). But including mobile phones might see young people living with their parents over-represented (as they can be reached both on their parents' land-line and on their own mobile phone). Notwithstanding these problems, with good sample design, telephone polling gets closer to the gold standard than other approaches.
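To make the coverage problem concrete, the sketch below simulates an invented population in which one voter in five cannot be reached by land-line and votes a little differently from everyone else. A land-line-only poll of that population is biased no matter how large the sample, because the unreachable group can never be selected. All the proportions and vote shares are made up for illustration.

```python
import random

random.seed(1)

# Invented population: 20 per cent of voters have no land-line and lean
# differently from the rest. Coverage bias arises because a land-line-only
# sample can never reach them, however large the sample.
N = 100_000
population = []
for _ in range(N):
    no_landline = random.random() < 0.20
    p_coalition = 0.44 if no_landline else 0.52   # invented vote shares
    population.append((no_landline, random.random() < p_coalition))

true_share = sum(votes for _, votes in population) / N

# A "perfect" random poll of 1,000 that only samples land-line households.
reachable = [votes for no_landline, votes in population if not no_landline]
sample = random.sample(reachable, 1000)
landline_estimate = sum(sample) / len(sample)

print(f"true Coalition share      : {true_share:.1%}")
print(f"land-line-only poll result: {landline_estimate:.1%}")
```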

More challenging is the two-stage sampling frame used in internet polling, where respondents are drawn from a panel that has itself been recruited from the whole population. The process of identifying and recruiting such a panel is likely to produce biases within it: people interested in politics and people with good internet access are more likely to become panel members. This selection bias is then carried through into the final poll results (especially if some panel members are regularly re-polled over time). While a pollster can normalise the internet panel results to the entire population (using the results of a general election or by running separate telephone polls), there is no guarantee that a shift in voter sentiment will be equally reflected in the non-randomly selected internet panel.
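The normalisation step usually takes the form of weighting: each panel respondent is counted more or less heavily so that the weighted sample matches known population benchmarks such as age, sex, region or past vote. The sketch below shows the idea for a single age variable with invented numbers; it is not any particular pollster's procedure.

```python
# Illustrative post-stratification weighting on a single variable (age group).
# All numbers are invented; real pollsters weight on several variables at once.

# Share of the adult population in each age group (assumed benchmarks).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of the internet panel's respondents in each age group (skewed older).
sample_share = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}

# Unweighted Coalition two-party-preferred estimate in each group (per cent).
coalition_tpp = {"18-34": 45.0, "35-54": 51.0, "55+": 57.0}

# Weight for respondents in a group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(sample_share[g] * coalition_tpp[g] for g in coalition_tpp)
weighted = sum(sample_share[g] * weights[g] * coalition_tpp[g] for g in coalition_tpp)

print(f"unweighted estimate: {unweighted:.1f} per cent")
print(f"weighted estimate  : {weighted:.1f} per cent")
```

The catch, as noted above, is that weighting can only correct skews on the variables used for weighting; if panel members within an age group differ from non-members of the same age (say, by being more politically engaged), the bias remains.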

The attraction of internet polling is cost. An internet panel poll is much cheaper to run than a phone poll (especially a human interfaced phone poll). But the downside for the cost equation may be an unquantifiable sampling error (which may move randomly over time as the internet panel is refreshed).

Why does this matter? In the United Kingdom last year, opinion polling failed comprehensively in the lead-up to the general election. The pollsters did not have representative samples: they included too few older voters and too many politically active younger people (the preliminary findings of the inquiry are here, reported in the Guardian here and the Telegraph here). The bifurcation we are seeing in the polls at the moment may reflect similar issues in Australian polling.

[For more information, see the links at the bottom of this page.]

Now that I have got that off my chest, let's turn to the other headline: the decline in the attitudinal polling for Turnbull (which has been more consistent).




Which can be aggregated as follows:




Links

