## Friday, December 6, 2019

### Aggregated attitudinal polling

At this point in the election cycle, only Newspoll is publishing primary vote share and two-party preferred population estimates. So there is nothing to aggregate across polling houses when it comes to voting intention. However, both Essential and Newspoll are publishing attitudinal polling. So I decided to build a Dirichlet-multinomial process model to see what trends there are in the attitudinal polling since the 2019 election.

First, however, we will look at the output from the model, before looking at the model itself.

Let's begin with the preferred prime minister polling. We see a small dip in the proportion of the population preferring the Prime Minister over the period (from 45.4 to 44.9 per cent). The Opposition Leader has improved a little over the period (from 26 to 29 per cent), but he is much less preferred than the Prime Minister. The "undecideds" have declined a little (from 29 to 26 per cent).

The median lines from the above charts can be combined on a chart as follows.

The model allows us to compare house effects in preferred Prime Minister polling. Those polled by Essential are more likely to express a preference on their preferred prime minister compared with the other houses.

The next set of charts are about satisfaction with the Prime Minister's performance. Satisfaction with the Prime Minister has declined from 48 to 45 per cent. Dissatisfaction has increased from 37 to 44 per cent.

Satisfaction with the Opposition Leader has improved from 37 to 38 per cent. Dissatisfaction has increased from 30 to 36 per cent. Undecideds have decreased from 32 to 25 per cent.

In summary, both leaders have seen a decline in net satisfaction. On this metric, the Prime Minister has fallen further than the Opposition Leader. The Opposition Leader ends the year with a higher net satisfaction rating compared with the Prime Minister.

The model that produced the above charts is as follows.

```stan
// STAN: Simplex Time Series Model
//  using a Dirichlet-multinomial process

data {
    // data size
    int<lower=1> n_polls;
    int<lower=1> n_days;
    int<lower=1> n_houses;
    int<lower=1> n_categories;

    // key variables
    int<lower=1> pseudoSampleSize; // maximum sample size for y
    real<lower=1> transmissionStrength;

    // give a rough idea of a starting point ...
    simplex[n_categories] startingPoint; // rough guess at series starting point
    int<lower=1> startingPointCertainty; // strength of guess - small number is vague

    // poll data
    int<lower=0,upper=pseudoSampleSize> y[n_polls, n_categories]; // a multinomial
    int<lower=1,upper=n_houses> house[n_polls]; // polling house
    int<lower=1,upper=n_days> poll_day[n_polls]; // day polling occurred
}

parameters {
    simplex[n_categories] hidden_voting_intention[n_days];
    matrix[n_houses-1, n_categories-1] freeHouseEffects; // free parameters
}

transformed parameters {
    // the full set of house effects, with sum-to-zero constraints imposed
    matrix[n_houses, n_categories] houseEffects;
    for(p in 1:(n_categories-1)) { // house effects sum to zero across houses
        for(h in 1:(n_houses-1))
            houseEffects[h, p] = freeHouseEffects[h, p];
        houseEffects[n_houses, p] = -sum(freeHouseEffects[, p]);
    }
    for(h in 1:n_houses) // house effects sum to zero across categories
        houseEffects[h, n_categories] = -sum(houseEffects[h, 1:(n_categories-1)]);
}

model{
    // -- house effects model
    for(h in 1:n_houses)
        houseEffects[h] ~ normal(0, 0.05); // weakly informative prior

    // -- temporal model
    hidden_voting_intention[1] ~ dirichlet(startingPoint * startingPointCertainty);
    for (day in 2:n_days)
        hidden_voting_intention[day] ~
            dirichlet(hidden_voting_intention[day-1] * transmissionStrength);

    // -- observed data model
    for(poll in 1:n_polls)
        y[poll] ~ multinomial(hidden_voting_intention[poll_day[poll]] +
            houseEffects[house[poll]]');
}
```

The model assumes that house effects sum to zero (both across polling houses and across the simplex categories). I set the startingPointCertainty variable to 10. The prior on the startingPoint is 0.333 for each series. The day-to-day transmissionStrength is set to 50,000 (attitudes yesterday are much the same as attitudes today). The pseudoSampleSize is set to 1000.
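The effect of the transmissionStrength setting can be sketched with a short simulation. The starting shares (0.333 each) and the concentration of 50,000 come from the description above; the seed and the 100-day horizon are arbitrary:

```python
import numpy as np

# A minimal sketch of the Dirichlet temporal prior described above.
# startingPoint (one third per category) and transmissionStrength (50,000)
# come from the text; everything else is illustrative.
rng = np.random.default_rng(7)
transmission_strength = 50_000

shares = np.array([1 / 3, 1 / 3, 1 / 3])
daily_moves = []
for _ in range(100):
    new_shares = rng.dirichlet(shares * transmission_strength)
    daily_moves.append(np.abs(new_shares - shares).max())
    shares = new_shares

# with a concentration of 50,000, day-to-day movement is tiny
print(round(max(daily_moves), 4))
```

The day-to-day movements are a small fraction of a percentage point, which is what the "attitudes yesterday are much the same as today" assumption encodes.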

As usual, the data for this analysis has been sourced from Wikipedia.

## Saturday, June 29, 2019

### Three anchored models

I have three anchored models for the period 2 July 2016 to 18 May 2019. The first is anchored to the 2016 election result (left anchored). The second model is anchored to the 2019 election result (right anchored). The third model is anchored to both election results (left and right anchored).  Let's look at these models.

The first thing to note is that the median lines in the left-anchored and right-anchored models are very similar. It is pretty much the same line moved up or down by 1.4 percentage points. As we have discussed previously, this difference of 1.4 percentage points is effectively a drift in the collective polling house effects over the period from 2016 to 2019. The polls opened after the 2016 election with a collective 1.7 percentage point pro-Labor bias. This bias grew by a further 1.4 percentage points to reach 3.1 percentage points at the time of the 2019 election (the difference between the yellow line and the blue/green lines on the right hand side of the last chart above).

The third model, the left-and-right anchored model, forces this drift to be reconciled within the model, but without any guidance on how. It explicitly assumes there is no such drift (ie. house effects are constant and unchanging). In handling this unspecified drift, the left-and-right anchored model places much of the adjustment close to the two anchor points at the left and right extremes of the chart. The shape of the middle of the chart is not dissimilar to the singly anchored charts.

While this is the output for the left-and-right anchored model, I would advise caution in assuming that the drift in polling house effects actually occurred in the period immediately after the 2016 election and immediately prior to the 2019 election. It is just that this is the best mathematical fit for a model that assumes there has been no drift. The actual drift could have happened slowly over the entire period, or quickly at the beginning, somewhere in the middle, or towards the end of the three year period.

My results for the left-and-right anchored model are not dissimilar to those of Jackman and Mansillo. The differences between our charts largely result from how I treat the day-to-day variance in voting intention (particularly following the polling discontinuity associated with the leadership transition from Turnbull to Morrison). I chose to specify this variance, rather than model it as a hyper-prior, because: (a) we can observe higher volatility immediately following discontinuity events; and (b) the sparse polling in Australia, especially in the 2016-19 period, produces an under-estimate of this variance in this model.

All three models have a very similar result for the discontinuity event itself: an impact just under three percentage points. Note: these charts are not in percentage points, but vote shares.

And just to complete the analysis, let's look at the house effects. With all of these houses effects, I would urge caution. These house effects are an artefact of the best fit in models that do not allow for the 1.4 percentage point drift in collective house effects that occurred between 2016 and 2019.

The three models are almost identical. The code for each follows.
```stan
// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed starting-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real start_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector<lower=lowerHE,upper=upperHE>[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit); // ANCHOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
}
```


```stan
// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed end-point only

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector<lower=lowerHE,upper=upperHE>[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(0.5, 0.15); // PRIOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); // ANCHOR
}
```


```stan
// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed starting-point and end-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor points
    real start_anchor;
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector<lower=lowerHE,upper=upperHE>[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit); // ANCHOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); // ANCHOR
}
```


Update: Kevin Bonham is also exploring what public voting intention might have looked like during the 2016-19 period.

## Tuesday, June 18, 2019

### Further polling reflections

I have been pondering whether the polls have been out of whack for some time, or whether the failure was recent (over the previous three, six or, say, twelve months). In previous posts, I looked at YouGov in 2017, and at monthly polling averages prior to the 2019 election.

Today I want to look at the initial polls following the 2016 election. First, however, let's recap the model I used for the 2019 election. In this model, I excluded YouGov and Roy Morgan from the sum-to-zero constraint on house effects. I have added a starting point reference to these charts, and increased the precision of the labels from one decimal place to two (although I would caution against reading these models to two decimal places; they are not that precise).

What is worth noting is that this series opens on 6 July 2016 some 1.7 percentage points down from the election result of 50.36 per cent of the two-party preferred (TPP) vote for the Coalition on 2 July 2016. The series closes some 3.1 percentage points down from the 18 May 2019 election result. It appears that the core-set of Australian pollsters started some 1.7 percentage points off the mark, and collectively gained a further 1.4 percentage points of error over the period from July 2016 to May 2019.

These initial polls are all from Essential, and they are under-dispersed. (We discussed the under-dispersion problem here, here, here, and here. I will come back to this problem in a future post.) The first two Newspolls were closer to the election result, but thereafter they aligned with Essential. The Newspolls from this period are also under-dispersed.

We can see how closely Newspoll and Essential tracked each other on average from the following chart of average house effects. I have Newspoll twice in this chart: once based on the original method for allocating preferences, and once (Newspoll2) based on the revised allocation of One Nation preferences from late in 2017.

If I had aggregated the polls prior to the 2019 election with an anchor to the previous election result, I would have achieved a better estimate of the Coalition's performance: effectively a prediction of a tie or a very narrow Coalition victory.

A good question to ask at this point is why I did not anchor the model to the previous election. The short answer is that I have watched a number of aggregators in past election cycles use an anchored model and end up with worse predictions than those who assumed the house effects across the pollsters cancel each other out on average. I have also assumed that pollsters use elections to recalibrate their polling methodologies, and that this recalibration represents a series break. A left-anchored series assumes there have been no series breaks.

In summary, at least 1.7 percentage points of polling error were baked in from the very first polls following the 2016 election. Over the period since July 2016, this error has increased to 3.1 percentage points.

Wonky note: For the anchored model, I changed the priors on house effects from weakly informative normals centred on zero, to uniform priors in the range -6% to +6%. I did this because the weakly informative priors were dragging the aggregation towards the centre of the data points.

The anchored STAN model code follows.
```stan
// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Updated for a fixed starting point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // previous election outcome anchor point
    real election_outcome;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.06;
    real upperHE = 0.06;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector<lower=lowerHE,upper=upperHE>[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(election_outcome, 0.00001);

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
}
```


## Saturday, June 8, 2019

### Was YouGov the winner of the 2016-19 polling season?

I have been wondering whether the pollsters have been off the mark for some years, or whether this is something that emerged recently (say, since Morrison's appointment as Prime Minister or since Christmas 2018). Today's exploration suggests the former: The pollsters have been off the mark for a number of years this electoral cycle.

Back in June 2017, international pollster YouGov appeared on the Australian polling scene with what looked like a fairly implausible set of poll results. The series was noisy, and well to the right of the other polling houses at the time. Back then, most pundits dismissed YouGov as a quaint curiosity.

| Date | Firm | Primary L/NP | Primary ALP | Primary GRN | Primary ONP | Primary OTH | TPP L/NP | TPP ALP |
|------|------|-------------:|------------:|------------:|------------:|------------:|---------:|--------:|
| 7-10 Dec 2017 | YouGov | 34 | 35 | 11 | 8 | 13 | 50 | 50 |
| 23-27 Nov 2017 | YouGov | 32 | 32 | 10 | 11 | 16 | 47 | 53 |
| 14 Nov 2017 | YouGov | 31 | 34 | 11 | 11 | 14 | 48 | 52 |
| 14-18 Sep 2017 | YouGov | 34 | 35 | 11 | 9 | 11 | 50 | 50 |
| 31 Aug - 4 Sep 2017 | YouGov | 34 | 32 | 12 | 9 | 13 | 50 | 50 |
| 17-21 Aug 2017 | YouGov | 34 | 33 | 10 | 10 | 13 | 51 | 49 |
| 20-24 Jul 2017 | YouGov | 36 | 33 | 10 | 8 | 13 | 50 | 50 |
| 6-11 Jul 2017 | YouGov | 36 | 33 | 12 | 7 | 12 | 52 | 48 |
| 22-27 Jun 2017 | YouGov | 33 | 34 | 12 | 7 | 14 | 49 | 51 |

The 2017 YouGov series was short-lived. In December 2017, YouGov acquired Galaxy, which had acquired Newspoll in May 2015. YouGov ceased publishing poll results under its own brand. Newspoll continued without noticeable change. By the time of the 2019 election, these nine YouGov polls from 2017 had been long forgotten.

Today's thought experiment: what if those nine YouGov polls were correct (on average)? I can answer this question by changing the Bayesian aggregation model so that it is centred on the YouGov polls, rather than assuming the house effects across a core set of pollsters sum to zero. Making this change yields a final poll aggregate of 51.3 per cent for the Coalition, which would have been remarkably close to the final 2019 election outcome (51.5 per cent).
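The effect of this re-centring can be shown with a toy calculation. The house names and numbers below are hypothetical placeholders, not the fitted effects from my model:

```python
import pandas as pd

# Hypothetical median house effects in percentage points (Coalition TPP).
# These numbers are illustrative only - not the fitted values from the model.
house_effects = pd.Series(
    {'Essential': -1.5, 'Newspoll': -1.2, 'Ipsos': -0.8, 'YouGov': 1.4})

# Default assumption: effects across a core set of houses sum to zero
core = ['Essential', 'Newspoll', 'Ipsos']
centred_on_core = house_effects - house_effects[core].mean()

# Thought experiment: centre the aggregate on YouGov instead
centred_on_yougov = house_effects - house_effects['YouGov']
print(centred_on_yougov)
```

Centring on YouGov shifts every other house's effect by the same constant, which in turn shifts the whole aggregate line.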

The house effects in this model are as follows.

And if we adjust the poll results for the median house effects identified in the previous chart, we get a series like this.

YouGov is a reliable international polling house; it gets a B-grade from FiveThirtyEight. When it entered the Australian market in 2017, YouGov produced poll results that were on average up to 3 percentage points to the right of the other pollsters. The 2019 election also produced a result that was around 3 percentage points to the right of the pollsters. That a respected international pollster could enter the Australian market and produce this result in 2017 suggests our regular Australian pollsters may have been missing the mark for quite some time.

Note: as usual, the above poll results are sourced from Wikipedia.

## Sunday, June 2, 2019

### More random reflections on the 2019 polls

Over the next few months, I will post some random reflections on the polls prior to the 2019 Election, and what went wrong. Today's post is a look at the two-party preferred (TPP) poll results over the past 12 months (well from 1 May 2018 to be precise). I am interested in the underlying patterns: both the periods of polling stability and when the polls changed.

With blue lines in the chart below, I have highlighted four periods when the polls look relatively stable. The first period is the last few months of the Turnbull premiership. The second period is Morrison's new premiership for the remainder of 2018. The third period is the first three months (and ten days in April) of 2019, prior to the election being called. The fourth and final period is from the dissolution of Parliament to the election. What intrigues me is the relative polling stability during each of these periods, and the marked jumps in voting intention (often over a couple of weeks) between these periods of stability.

To provide another perspective, I have plotted in red the calendar month polling averages. For the most part, these monthly averages stay close to the four period-averages I identified.
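The calendar-month averages are straightforward to compute with pandas. The readings below are made-up placeholders to show the mechanics, not the actual 2018-19 series:

```python
import pandas as pd

# Hypothetical Coalition TPP poll readings (per cent); illustrative only
polls = pd.DataFrame({
    'date': pd.to_datetime(['2018-07-03', '2018-07-17', '2018-07-31',
                            '2018-08-14', '2018-08-28']),
    'coalition_tpp': [48.5, 49.0, 48.9, 47.0, 46.5],
})

# Calendar-month polling averages (the red line in the chart)
monthly = polls.groupby(polls['date'].dt.to_period('M'))['coalition_tpp'].mean()
print(monthly.round(2))
```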

The only step change that I can clearly explain is the change from Turnbull to Morrison (immediately preceded by the Dutton challenge to Turnbull's leadership). This step change is emblematic of one of the famous aphorisms of Australian politics: disunity is death.

It is ironic to note that the highest monthly average for the year was 48.8 per cent in July 2018 under Turnbull. It is intriguing to wonder whether the polls were as out of whack in July 2018 as they were in May 2019 (when they collectively failed to foreshadow a Coalition TPP vote share at the 2019 election in excess of 51.5 per cent). Was Turnbull toppled for electability issues when he actually had 52 per cent of the TPP vote share?

The next step change that might be partially explainable is the last one: chronologically, it is associated with the 2 April Budget followed by the calling of the election on 11 April 2019. The Budget was a classic pre-election Budget (largesse without nasties), and calling the election focuses the mind of the electorate on the outcome. However, I really do not find this explanation satisfying. Budgets are very technical documents, and people usually only understand the costs and benefits when they actually experience them. Nothing in the Budget was implemented prior to the election being called.

I am at a loss to explain the step change over the Christmas/New-Year period at the end of 2018 and the start of 2019. It was clearly a summer of increasing contentment with the government.

I am also intrigued by the question of whether the polls have been consistently wrong over this one-year period, or whether the polls have increasingly deviated from the population voting intention as they failed to fully comprehend Morrison's improved polling position over recent months.

Note: as usual I am relying on Wikipedia for the Australian opinion polling data.

## Thursday, May 23, 2019

### Further analysis of poll variance

The stunning feature of the opinion polls leading up to the 2019 Federal Election is that they did not look like the statistics you would expect from independent, random-sample polls of the voting population. All sixteen polls were within one percentage point of each other. As I have indicated previously, this is much more tightly clustered than is mathematically plausible. This post explores that mathematics further.

### My initial approach to testing for under-dispersion

One of the foundations of statistics is the notion that if I draw many independent and random samples from a population, the means of those many random samples will be normally distributed around the population mean (represented by the Greek letter mu $\mu$). This is known as the Central Limit Theorem or the Sampling Distribution of the Sample Mean. In practice, the Central Limit Theorem holds for samples of size 30 or higher.

The span or spread of the distribution of the many sample means around the population mean will depend on the size of those samples, which is usually denoted with a lower-case $n$. Statisticians measure this spread through the standard deviation (which is usually denoted by the Greek letter sigma $\sigma$). With the two-party preferred voting data, the standard deviation for the sample proportions is given by the following formula:

$$\sigma = \sqrt{\frac{proportion_{CoalitionTPP} * proportion_{LaborTPP}}{n}}$$

While I have the sample sizes for most of the sixteen polls prior to the 2019 Election, I do not have the sample size for the final YouGov/Galaxy poll. Nor do I have the sample size for the Essential poll on 25–29 Apr 2019. For analytical purposes, I have assumed both surveys were of 1000 people. The sample sizes for the sixteen polls ranged from 707 to 3008. The mean sample size was 1403.

If we take the smallest poll, with a sample of 707 voters, we can use the standard deviation to see how likely it was to have a poll result in the range 48 to 49 for the Coalition. We will need to make an adjustment, as most pollsters round their results to the nearest whole percentage point before publication.

So the question we will ask is: if we assume the population voting intention for the Coalition was 48.625 per cent (the arithmetic mean of the sixteen polls), what is the probability of a sample of 707 voters producing a result in the range 47.5 to 49.5, which would round to 48 or 49 per cent?

For samples of 707 voters, and assuming the population mean was 48.625, we would only expect to see a poll result of 48 or 49 around 40 per cent of the time. This is the area under the curve from 47.5 to 49.5 on the x-axis when the entire area under the curve sums to 1 (or 100 per cent).

We can compare this with the expected distribution for the largest sample of 3008 voters. Our adjustment here is slightly different, as the pollster, in this case, rounded to the nearest half a percentage point. So we are interested in the area under the curve from 47.75 to 49.25 per cent.

Because the sample size ($n$) is larger, the spread of this distribution is narrower (compare the scale on the x-axis for both charts). We would expect almost 60 per cent of the samples to produce a result in the range 48 to 49 if the population mean ($\mu$) was 48.625 per cent.
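Both of these probabilities can be checked with scipy, assuming (as above) a population mean of 48.625 per cent and the stated rounding ranges:

```python
import math
from scipy import stats

MU = 48.625  # assumed population mean for the Coalition (per cent)

def prob_rounds_to_48_or_49(n, lo, hi):
    """Probability that a sample of n voters yields a published 48 or 49."""
    sigma = math.sqrt(MU * (100 - MU) / n)  # in percentage points
    dist = stats.norm(MU, sigma)
    return dist.cdf(hi) - dist.cdf(lo)

# smallest poll: results rounded to the nearest whole percentage point
print(round(prob_rounds_to_48_or_49(707, 47.5, 49.5), 2))
# largest poll: results rounded to the nearest half a percentage point
print(round(prob_rounds_to_48_or_49(3008, 47.75, 49.25), 2))
```

The first probability is roughly 40 per cent and the second is close to 60 per cent, matching the two charts discussed above.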

We can extend this technique to all sixteen polls. We can find the proportion of all possible samples we would expect to generate a published poll result of 48 or 49. We can then multiply these probabilities together to get the probability that all sixteen polls would be in this range. Using this method, I estimate that there is a one in 49,706 chance that all sixteen polls would fall in the range 48 to 49 for the Coalition (if the polls were independent random samples of the population, and the population mean was 48.625 per cent).

### Chi-squared goodness of fit

Another approach is to apply a Chi-squared ($\chi^2$) test for goodness of fit to the sixteen polls. We can use this approach because the Central Limit Theorem tells us that the poll results should be normally distributed around the population mean. The Chi-squared test will tell us whether the poll results are normally distributed or not. In this case, the formula for the Chi-squared statistic is:

$$\chi^2 = \sum_{i=1}^k {\biggl( \frac{x_i - \mu}{\sigma_i} \biggr)}^2$$

Let's step through this equation. It is nowhere near as scary as it looks. To calculate the Chi-squared statistic, we do the following calculation for each poll:
• First, we calculate the mean deviation for the poll by taking the published poll result ($x_i$) and subtracting the population mean $\mu$, which we estimated using the arithmetic mean for all of the polls.
• We then divide the mean deviation by the standard deviation for the poll ($\sigma_i$), and then we
• square the result (multiply it by itself) - this ensures we get a positive statistic in respect of every poll.
Finally, we sum these ($k=16$) squared results from each of the polls.

If the polls are normally distributed, the absolute difference between the poll result and the population mean (the mean deviation) should be around one standard deviation on average. For sixteen polls that were normally distributed around the population mean, we would expect a Chi-squared statistic around the number sixteen.

If the Chi-squared statistic is much less than 16, the poll results could be under-dispersed. If the Chi-squared statistic is much more than 16, then the poll results could be over-dispersed. For sixteen polls (which have 15 degrees of freedom, because our estimate for the population mean ($\mu$) is constrained by and comes from the 16 poll results), we would expect 99 per cent of the Chi-squared statistics to be between 4.6 and 32.8.

The Chi-squared statistic I calculate for the sixteen polls is 1.68, which is much less than the expected 16 on average. I can convert this 1.68 Chi-squared statistic to a probability for 15 degrees of freedom. When I do this, I find that if the polls were truly independent and random samples, (and therefore normally distributed), there would be a one in 108,282 chance of generating the narrow distribution of poll results we saw prior to the 2019 Federal Election. We can confidently say the published polls were under-dispersed.
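The 99 per cent range and the conversion to a probability can be reproduced with scipy. Note that I use the rounded statistic of 1.68 here, so the implied odds differ slightly from the one-in-108,282 figure, which was computed from the unrounded statistic:

```python
from scipy import stats

dof = 15  # sixteen polls, with the mean estimated from the data
chi2_stat = 1.68  # the rounded statistic quoted above

# central 99 per cent range for a chi-squared statistic with 15 dof
print(round(stats.chi2.ppf(0.005, df=dof), 1))  # ~4.6
print(round(stats.chi2.ppf(0.995, df=dof), 1))  # ~32.8

# probability of a statistic at least this small under the null
p = stats.chi2.cdf(chi2_stat, df=dof)
print(round(1 / p))
```

The implied odds are of the order of one in a hundred thousand.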

Note: If I was to use the language of statistics, I would say our null hypothesis ($H_0$) has the sixteen poll results normally distributed around the population mean. Now if the null hypothesis is correct, I would expect the Chi-squared statistic to be in the range 4.6 and 32.8 (99 per cent of the time). However, as our Chi-squared statistic is outside this range, we reject the null hypothesis for the alternative hypothesis ($H_a$) that collectively, the poll results are not normally distributed.

### Why the difference?

It is interesting to speculate on why there is a difference between these two approaches. While both approaches suggest the poll results were statistically unlikely, the Chi-squared test says they are twice as unlikely as the first approach. I suspect the answer comes from the rounding the pollsters apply to their raw results. This impacts on the normality of the distribution of poll results. In the Chi-squared test, I did not look at rounding.

### So what went wrong?

There are really two questions here:
• Why were the polls under-dispersed; and
• On the day, why did the election result differ from the sixteen prior poll estimates?

To be honest, it is too early to tell with any certainty, for both questions. But we are starting to see statements from the pollsters that suggest where some of the problems may lie.

A first issue seems to be the increased use of online polls. There are a few issues here:
• Finding a random sample in which all Australians have an equal chance of being polled - there have been suggestions that too many educated and politically active people are in the online samples.
• Resampling the same individuals from time to time - meaning the samples are not independent. (This may explain the lack of noise we have seen in polls in recent years.) If your sample is not representative, and it is used often, then all of your poll results will be skewed.
• An over-reliance on clever analytics and weights to try to make a pool of online respondents look like the broader population. These weights are challenging to keep accurate and reliable over time.
More generally, regardless of the polling methodology:
• The use of weighting, where some groups are under-represented in the raw sample frame, can mean that sampling errors get magnified.
• Not having quotas and weights for all the factors that align somewhat with cohort political differences can mean polls accidentally fail to sample important constituencies.
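One standard way to quantify the magnification from weighting is the Kish effective sample size. A sketch with made-up lognormal weights (the weight distribution and its spread are assumptions, purely for illustration):

```python
import numpy as np

# Kish effective sample size: weighting shrinks the information in a poll.
# The lognormal weight distribution here is an assumption for illustration.
rng = np.random.default_rng(1)
weights = rng.lognormal(mean=0.0, sigma=0.6, size=1000)

n_eff = weights.sum() ** 2 / (weights ** 2).sum()
print(round(n_eff))  # substantially fewer than the 1,000 respondents
```

The more unequal the weights, the smaller the effective sample, and the wider the true margin of error compared with the published one.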

Like Kevin Bonham, I am not a fan of the following theories:
• Shy Tory voters - too embarrassed to tell pollsters of their secret intention to vote for the Coalition.
• A late swing after the last poll.

### Code snippet

To be transparent about how I approached this task, the Python code snippet follows.
import pandas as pd
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

import sys
sys.path.append( '../bin' )
plt.style.use('../bin/markgraph.mplstyle')

# --- Raw data
sample_sizes = pd.Series([3008, 1000, 1842, 1201, 1265, 1644, 1079, 826,
                          2003, 1207, 1000, 826, 2136, 1012, 707, 1697])
measurements = pd.Series([51.5, 51,  51,  51.5, 52,  51,  52,  51,  # for Labor
                          51,   52,  51,  51,   51,  52,  51,  52])
roundings = pd.Series([0.25, 0.5, 0.5, 0.25, 0.5, 0.5, 0.5, 0.5,
                       0.5,  0.5, 0.5, 0.5,  0.5, 0.5, 0.5, 0.5])

# some pre-processing
Mean_Labor = measurements.mean()
Mean_Coalition = 100 - Mean_Labor
variances = (measurements * (100-measurements)) / sample_sizes
standard_deviations = pd.Series(np.sqrt(variances)) # sigma

print('Mean measurement: ', Mean_Labor)
print('Measurement counts:\n', measurements.value_counts())
print('Sample size range from/to: ', sample_sizes.min(),
      sample_sizes.max())
print('Mean sample size: ', sample_sizes.mean())

# --- Using normal distributions
print('-----------------------------------------------------------')
individual_probs = []
for sd, r in zip(standard_deviations, roundings):
    individual_probs.append(stats.norm(Mean_Coalition, sd).cdf(49.0 + r) -
                            stats.norm(Mean_Coalition, sd).cdf(48.0 - r))

# print individual probabilities for each poll
print('Individual probabilities: ', individual_probs)

# product of all probabilities to calculate combined probability
probability = pd.Series(individual_probs).product()
print('Overall probability: ', probability)
print('1/Probability: ', 1/probability)

# --- Chi Squared - check normally distributed - two tailed test
print('-----------------------------------------------------------')
dof = len(measurements) - 1 ### degrees of freedom
print('Degrees of freedom: ', dof)
X = pow((measurements - Mean_Labor)/standard_deviations, 2).sum()
X_min = stats.distributions.chi2.ppf(0.005, df=dof)
X_max = stats.distributions.chi2.ppf(0.995, df=dof)
print('Expected X^2 between: ', round(X_min, 2), ' and ', round(X_max, 2))
print('X^2 statistic: ', X)
X_probability = stats.chi2.cdf(X , dof)
print('Probability: ', X_probability)
print('1/Probability: ', 1 / X_probability)

# --- Chi-squared plot
print('-----------------------------------------------------------')
x = np.linspace(0, X_min + X_max, 250)
y = pd.Series(stats.chi2(dof).pdf(x), index=x)

ax = y.plot()
ax.set_title(r'$\chi^2$ Distribution: degrees of freedom=' + str(dof))
ax.axvline(X_min, color='royalblue')
ax.axvline(X_max, color='royalblue')
ax.axvline(X, color='orange')
ax.text(x=(X_min + X_max) / 2, y=0.00, s='99% between ' + str(round(X_min, 2)) +
        ' and ' + str(round(X_max, 2)), ha='center', va='bottom')
ax.text(x=X, y=0.01, s=r'$\chi^2 = ' + str(round(X, 2)) + '$',
        ha='right', va='bottom', rotation=90)

ax.set_xlabel(r'$\chi^2$')
ax.set_ylabel('Probability')

fig = ax.figure
fig.set_size_inches(8, 4)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
         ha='right', va='bottom', fontsize='x-small',
         fontstyle='italic', color='#999999')
fig.savefig('./Graphs/Chi-squared.png', dpi=125)
plt.close()

# --- some normal plots
print('-----------------------------------------------------------')
mu = Mean_Coalition

n = 707
low = 47.5
high = 49.5

sigma = np.sqrt((Mean_Labor * Mean_Coalition) / n)
x = np.linspace(mu - 4*sigma, mu + 4*sigma, 200)
y = pd.Series(stats.norm.pdf(x, mu, sigma), index=x)

ax = y.plot()
ax.set_title('Distribution of samples: n=' + str(n) + ', μ=' +
             str(mu) + ', σ=' + str(round(sigma, 2)))
ax.axvline(low, color='royalblue')
ax.axvline(high, color='royalblue')
ax.text(x=low-0.5, y=0.05, s=str(round(stats.norm.cdf(low,
        loc=mu, scale=sigma)*100.0, 1))+'%', ha='right', va='center')
ax.text(x=high+0.5, y=0.05, s=str(round((1-stats.norm.cdf(high,
        loc=mu, scale=sigma))*100.0, 1))+'%', ha='left', va='center')
mid = str(round((stats.norm.cdf(high, loc=mu, scale=sigma) -
                 stats.norm.cdf(low, loc=mu, scale=sigma))*100.0, 1))+'%'
ax.text(x=48.5, y=0.05, s=mid, ha='center', va='center')

ax.set_xlabel('Per cent')
ax.set_ylabel('Probability')

fig = ax.figure
fig.set_size_inches(8, 4)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
         ha='right', va='bottom', fontsize='x-small',
         fontstyle='italic', color='#999999')
fig.savefig('./Graphs/'+str(n)+'.png', dpi=125)
plt.close()

# ---
n = 3008
low = 47.75
high = 49.25

sigma = np.sqrt((Mean_Labor * Mean_Coalition) / n)
x = np.linspace(mu - 4*sigma, mu + 4*sigma, 200)
y = pd.Series(stats.norm.pdf(x, mu, sigma), index=x)

ax = y.plot()
ax.set_title('Distribution of samples: n=' + str(n) + ', μ=' +
             str(mu) + ', σ=' + str(round(sigma, 2)))
ax.axvline(low, color='royalblue')
ax.axvline(high, color='royalblue')
ax.text(x=low-0.25, y=0.3, s=str(round(stats.norm.cdf(low,
        loc=mu, scale=sigma)*100.0, 1))+'%', ha='right', va='center')
ax.text(x=high+0.25, y=0.3, s=str(round((1-stats.norm.cdf(high,
        loc=mu, scale=sigma))*100.0, 1))+'%', ha='left', va='center')
mid = str(round((stats.norm.cdf(high, loc=mu, scale=sigma) -
                 stats.norm.cdf(low, loc=mu, scale=sigma))*100.0, 1))+'%'
ax.text(x=48.5, y=0.3, s=mid, ha='center', va='center')

ax.set_xlabel('Per cent')
ax.set_ylabel('Probability')

fig = ax.figure
fig.set_size_inches(8, 4)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
         ha='right', va='bottom', fontsize='x-small',
         fontstyle='italic', color='#999999')
fig.savefig('./Graphs/'+str(n)+'.png', dpi=125)
plt.close()


## Sunday, May 19, 2019

### A polling failure and a betting failure

Well, that went bad for the pollsters. Every poll published during the election campaign got it wrong. Collectively the polls suggested Labor would win around 51.5 per cent of the two-party preferred vote; at this stage in the count, it looks more like 49 per cent for Labor to the Coalition's 51 per cent.

I am as surprised as most. While it was obvious that the pollsters were doing something that reduced polling noise (and hopefully increased the polling signal), I assumed they knew what they were doing. What I really wanted was for the pollsters to tell us (the consumers of their information) how it was made: because it ain't what it says on the tin.

The 16 published polls since the commencement of the election campaign did not have the numerical features a statistician would expect from independent, representative and randomly sampled opinion polls. They did not look normally distributed around a population mean (even one that may have been moving over time). In short, the polls were under-dispersed.
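A back-of-the-envelope check makes the point (a sketch only: it ignores rounding, house effects and any movement in the population mean, each of which would tend to increase the observed spread, not reduce it):

```python
import numpy as np

# the sixteen published campaign polls: Labor two-party preferred (per cent)
labor = np.array([51.5, 51, 51, 51.5, 52, 51, 52, 51,
                  51, 52, 51, 51, 51, 52, 51, 52])
sizes = np.array([3008, 1000, 1842, 1201, 1265, 1644, 1079, 826,
                  2003, 1207, 1000, 826, 2136, 1012, 707, 1697])

observed_sd = labor.std(ddof=1)
# the sampling error expected of a single poll of the average size
p = labor.mean()
expected_sd = np.sqrt(p * (100 - p) / sizes.mean())
print(round(observed_sd, 2), round(expected_sd, 2))
```

The observed spread is around a third of what independent random samples of this size should produce.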

I was troubled by the under-dispersion in the polls (here, here, and here), and I knew this could increase the risk of a polling failure. But I was not expecting a massive failure as such. Consistent with the polls, I thought the most likely outcome was a Labor victory in the order of 80 seats (plus or minus a few), with the Coalition to pick up around 65 and for others to land around 6 seats (80-65-6). The final result could end up being closer to 68-77-6. While a polling failure was possible, perhaps even 30 per cent likely, I did not think it the most likely outcome. Let's chalk it up to living in the Canberra bubble and confirmation bias.

I was also a little annoyed. The Bayesian aggregation technique I use makes the most use of the data at either end of the normal distribution around the population mean. Yet this data was implausibly missing from the public record. You don't need an aggregator when every poll result sits in the range 48-49 to 51-52; there is nothing for an aggregator to clarify in results like those.

Because I assumed the pollsters were smoothing their own polls, I wondered what raw results they were actually seeing. Compared with February and March (Coalition on 47 per cent in round terms), the collective April and May poll results were substantially different (48.5 per cent). It is almost as if the public's mood shifted one and a half percentage points overnight with the 2 April Morrison Budget (and I am a long-standing sceptic about the capacity for Budgets to shift public opinion). To smooth so quickly to a substantially different number seemed unusual and analytically complicated. I wondered a number of times whether the pollsters had seen a 50 or a 51 or even a 52 for the Coalition in their raw data before smoothing (indeed, thinking about the missing inliers and outliers was how I got to being troubled by the polls).

What next: Something has to change. Like the United Kingdom, which had a similar scale polling failure with its 2015 general election, we need an inquiry into what went wrong. We also need way more transparency. Pollsters need to explain their methodology better and publish more on the pre-publication processing they undertake.

At least the myth of bookmakers knowing best has been put to bed. The bookmakers had a bad day too: especially Sportsbet, which had paid out early on a Labor win.

#### Postscript

Thanks to the Poll Bludger for the recognition. And some further reflections at Poll Bludger.

It is nice to see that my questioning of the under-dispersion in the polls means that I am now labelled a hardcore psephologist (albeit before the election).

The postmortem at freerangestats.info is worth reading.

A great election postmortem by Kevin Bonham.

The mathematics does not lie: why polling got the Australian election wrong, By Brian Schmidt.

## Saturday, May 18, 2019

### Pre-polling in 2019

We have had a substantial pre-poll turn-out at the 2019 election.

The code snippet for this chart follows.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys
sys.path.append( '../bin' )
plt.style.use('../bin/markgraph.mplstyle')

pp2010 = './Data/e2010-prepoll-stats-19-08.csv'
pp2013 = './Data/e2013-prepoll-stats-07-09.csv'
pp2016 = './Data/20160702_WEB_Prepoll_Report.csv'
pp2019 = './Data/20190518_WEB_Pre-poll_Report_FE2019.csv'

# --- build a comparative table from the AEC files
elections = ['2010-08-21', '2013-09-07', '2016-07-02', '2019-05-18']
years = [e[0:4] for e in elections]
files = [globals()[y] for y in ['pp' + x for x in years]]

for (y, f, e) in zip(years, files, elections):
    print(y)
    df = pd.read_csv(f)  # read the AEC pre-poll file for this election

    # - delete n initial columns - sum to daily totals - calculate index
    if y == '2010':
        n = 2
    elif y in ['2013', '2016']:
        n = 3
    elif y == '2019':
        n = 4
    s = df.drop(labels=df.columns[0:n], axis=1).sum()
    s.index = pd.to_datetime(arg=s.index.values, dayfirst=True)
    s.index = s.index - np.datetime64(e, 'D')

    # - build up the comparative table for each election
    if y == '2010':
        table = pd.DataFrame([s], index=[y]).T
    else:
        table = table.reindex(index=table.index.union(s.index))
        table[y] = s

# --- tidy up - present as cumsum - metric is millions - make index an int
table = (table / 1_000_000).fillna(0).cumsum()
table.index = (table.index.values / np.timedelta64(1, 'D')).astype(int)

# --- plot the comparative table
ax = table.plot()
ax.set_title('Cumulative Pre-Poll Numbers')
ax.set_xlabel('Days prior to the Election')
ax.set_ylabel('Millions pre-polled')
fig = ax.figure
fig.set_size_inches(8, 4)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
         ha='right', va='bottom', fontsize='x-small',
         fontstyle='italic', color='#999999')
fig.savefig('./Graphs/Pre-poll.png', dpi=125)
plt.close()


### A last fleeting look at the betting markets

At 8.10am on the morning of the election, the bookmakers have Labor as the clear favourite.

Date       House     Coalition Odds ($) Labor Odds ($) Coalition Win Probability (%)
2019-05-18 BetEasy   6.00               1.13           15.85
2019-05-18 Sportsbet 5.75               1.14           16.55
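The win probabilities in the table come from normalising the two raw odds for the bookmaker's over-round. A minimal sketch (the helper name is mine):

```python
# hypothetical helper: convert a two-outcome market's decimal odds into an
# over-round-adjusted win probability (per cent) for the first outcome
def implied_probability(odds_a, odds_b):
    raw_a, raw_b = 1 / odds_a, 1 / odds_b   # raw bookmaker probabilities
    return 100 * raw_a / (raw_a + raw_b)    # normalised so the pair sums to 100

print(round(implied_probability(6.00, 1.13), 2))   # BetEasy Coalition -> 15.85
print(round(implied_probability(5.75, 1.14), 2))   # Sportsbet Coalition -> 16.55
```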

Turning to the individual seat markets at Sportsbet, the summary charts follow. These are consistent with a TPP vote for Labor around 51.5 per cent.

For the individual seat odds, the implied probability of majority government formation is 96.5 per cent. The Coalition has no chance of forming majority government.

The 151 time-series charts for each seat follow. Because the bookmakers have such humongous over-rounds with their individual seat odds, I take a fairly savage approach with the long-shot odds.

• For odds between $1.01 and $1.02 (bookmaker raw probabilities between 98 and 99 per cent), I have treated the seat as having a probability of 100 per cent for the favourite.
• For odds between $1.03 and $1.50 (bookmaker raw probabilities between 66.7 and 97 per cent), I have done all of the normalisation for the bookmaker's over-round in terms of the non-favourite parties.
• For odds in excess of $9.99, I have simply ignored them.

Note: I have set the odds for Katter in Kennedy to $1.05, as this betting option was suspended.
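Those rules can be sketched as a function (the function, and the fall-back proportional normalisation for favourites longer than $1.50, are my own framing of the steps above, not the exact code used):

```python
# a sketch of the long-shot adjustment: the thresholds are those stated in
# the text; the structure and the final fall-back branch are assumptions
def adjust(odds):
    """Map a seat's raw decimal odds {party: odds} to win probabilities (%)."""
    odds = {party: o for party, o in odds.items() if o <= 9.99}  # ignore long shots
    favourite = min(odds, key=odds.get)
    if odds[favourite] <= 1.02:
        # near-certain favourite: assign it the full 100 per cent
        return {party: (100.0 if party == favourite else 0.0) for party in odds}
    raw = {party: 1 / o for party, o in odds.items()}
    if odds[favourite] <= 1.50:
        # normalise the over-round entirely within the non-favourite parties
        fav_prob = raw[favourite]
        others = sum(p for party, p in raw.items() if party != favourite)
        if others == 0:
            return {favourite: 100.0}
        scale = (1 - fav_prob) / others
        return {party: 100 * (p if party == favourite else p * scale)
                for party, p in raw.items()}
    # otherwise: plain proportional normalisation (an assumed fall-back)
    total = sum(raw.values())
    return {party: 100 * p / total for party, p in raw.items()}

print(adjust({'ALP': 1.2, 'LNP': 4.4}))  # illustrative odds, not a real seat
```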

Note: betting on Katter in Kennedy was suspended. Estimate based on Labor odds.

The key odds that inform the above charts follow.

[Table: final Sportsbet seat-by-seat odds for all 151 seats, with columns for Labor, Coalition, Liberal (Coalition), National (Coalition), Greens, Independent, Centre Alliance, Katter's Australian Party, and Shooters, Fishers and Farmers.]