
Saturday, June 29, 2019

Three anchored models

I have three anchored models for the period 2 July 2016 to 18 May 2019. The first model is anchored to the 2016 election result (left anchored). The second model is anchored to the 2019 election result (right anchored). The third model is anchored to both election results (left and right anchored). Let's look at these models.






The first thing to note is that the median lines in the left-anchored and right-anchored models are very similar. It is pretty much the same line moved up or down by 1.4 percentage points. As we have discussed previously, this difference of 1.4 percentage points is effectively a drift in the collective polling house effects over the period from 2016 to 2019. The polls opened after the 2016 election with a collective 1.7 percentage point pro-Labor bias. This bias grew by a further 1.4 percentage points to reach 3.1 percentage points at the time of the 2019 election (the difference between the yellow line and the blue/green lines on the right-hand side of the last chart above).

The third model, the left-and-right anchored model, forces this drift to be reconciled within the model (but without any guidance to the model on when or how the drift occurred). The left-and-right anchored model explicitly assumes there is no such drift (i.e. house effects are constant and unchanging). In handling this unspecified drift, the left-and-right anchored model places much of the adjustment close to the two anchor points at the left and right extremes of the chart. The shape of the middle of the chart is not dissimilar to the singly anchored charts.

While this is the output for the left-and-right anchored model, I would caution against assuming that the drift in polling house effects actually occurred immediately after the 2016 election and immediately before the 2019 election. This is simply the best mathematical fit for a model that assumes there has been no drift. The actual drift could have happened slowly over the entire period, or quickly at the beginning, somewhere in the middle, or towards the end of the three-year period.

My results for the left-and-right anchored model are not dissimilar to those of Jackman and Mansillo. The differences between our charts are largely a result of how I treat the day-to-day variance in voting intention (particularly following the polling discontinuity associated with the leadership transition from Turnbull to Morrison). I chose to specify this variance, rather than estimate it under a hyper-prior. I specified this parameter because: (a) we can observe higher volatility immediately following discontinuity events; and (b) the sparse polling in Australia, especially in the 2016-19 period, would lead this model to under-estimate this variance.
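For those curious about the alternative, the following is a minimal sketch (not code I ran) of what estimating the day-to-day standard deviation under a hyper-prior might look like; the half-normal scale shown is illustrative only.

// Sketch only: day-to-day volatility estimated rather than fixed
parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
    real<lower=0> sigma;       // day-to-day standard deviation as a parameter
}

model {
    sigma ~ normal(0, 0.005);  // illustrative half-normal hyper-prior
    // ... the temporal and measurement models then follow as in the code
    //     below, with the fixed sigma replaced by this parameter
}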

All three models produce a very similar result for the discontinuity event itself: an impact of just under three percentage points. Note: these charts are not in percentage points, but vote shares.




And just to complete the analysis, let's look at the house effects. I would urge caution with all of these house effects: they are an artefact of the best fit in models that do not allow for the 1.4 percentage point drift in collective house effects that occurred between 2016 and 2019.




The STAN code for the three models follows; as you can see, the three models are almost identical.
// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed starting-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;
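    // (illustrative only: for an assumed pseudo sample size of about 1,000
    //  and a vote share near 50%, this would be sqrt(0.25 / 1000) ≈ 0.016)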

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real start_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit); // ANCHOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
}

// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed end-point only

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(0.5, 0.15); // PRIOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); //ANCHOR
}

// STAN: Two-Party Preferred (TPP) Vote Intention Model
//     - Fixed starting-point and end-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real start_anchor;
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit); // ANCHOR

    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);

    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma);

    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day],
        pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); //ANCHOR
}

Update: Kevin Bonham is also exploring what public voting intention might have looked like during the 2016-19 period.

Tuesday, June 18, 2019

Further polling reflections

I have been pondering whether the polls have been out of whack for some time, or whether the failure was more recent (over the previous 3, 6 or, say, 12 months). In previous posts, I looked at YouGov in 2017, and at monthly polling averages prior to the 2019 election.

Today I want to look at the initial polls following the 2016 election. First, however, let's recap the model I used for the 2019 election. In this model, I excluded YouGov and Roy Morgan from the sum-to-zero constraint on house effects. I have added a starting-point reference to these charts, and increased the precision of the labels from one decimal place to two. However, I would caution against reading these models to two decimal places; the models are not that precise.


What is worth noting is that this series opens on 6 July 2016 some 1.7 percentage points below the election result of 50.36 per cent of the two-party preferred (TPP) vote for the Coalition on 2 July 2016. The series closes some 3.1 percentage points below the 18 May 2019 election result. It appears that the core set of Australian pollsters started some 1.7 percentage points off the mark, and collectively gained a further 1.4 percentage points of error over the period from July 2016 to May 2019.

These initial polls are all from Essential, and they are under-dispersed. (We discussed the under-dispersion problem here, here, here, and here. I will come back to this problem in a future post.) The first two Newspolls were closer to the election result, but Newspoll aligned with Essential from then on. The Newspolls from this period are also under-dispersed.

We can see how closely Newspoll and Essential tracked each other on average from the following chart of average house effects. Newspoll appears twice in this chart: once for the original method of allocating preferences, and once (Newspoll2) for the revised allocation of One Nation preferences from late 2017.


If I had aggregated the polls prior to the 2019 election with the line anchored to the previous election, I would have achieved a better estimate of the Coalition's performance than I did: effectively, a prediction of a tie or a very narrow Coalition victory.




A good question to ask at this point is why I did not anchor the model to the previous election. The short answer is that I have watched a number of aggregators in past election cycles use an anchored model and end up with worse predictions than those who assumed the house effects across the pollsters cancel each other out on average. I have also assumed that pollsters use elections to recalibrate their polling methodologies, and that this recalibration represents a series break. A left-anchored series assumes there have been no series breaks.

In summary, at least 1.7 percentage points of polling error were baked in from the very first polls following the 2016 election. Over the period since July 2016, this error has increased to 3.1 percentage points.

Wonky note: For the anchored model, I changed the priors on house effects from weakly informative normals centred on zero to uniform priors in the range -6% to +6%. I did this because the weakly informative priors were dragging the aggregation towards the centre of the data points.
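To illustrate that change, here is a minimal sketch of the two priors side by side; the normal scale shown is indicative only, not necessarily the value I had been using.

model {
    // previously: a weakly informative prior centred on zero, which tended to
    // drag the anchored aggregate towards the centre of the data points
    // pHouseEffects ~ normal(0, 0.02);    // scale indicative only

    // now: a uniform prior over a plausible range of house effects
    pHouseEffects ~ uniform(-0.06, 0.06);
}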

The anchored STAN model code follows.
// STAN: Two-Party Preferred (TPP) Vote Intention Model 
//     - Updated for a fixed starting point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;
    
    // assumed standard deviation for all polls
    real pseudoSampleSigma;
    
    // poll data
    vector[n_polls] y; // TPP vote share
    int house[n_polls];
    int day[n_polls];
    //vector [n_polls] poll_qual_adj; // poll quality adjustment
    
    // period of discontinuity event
    int discontinuity;
    int stability;
    
    // previous election outcome anchor point
    real election_outcome;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;
    
    // house effect range
    real lowerHE = -0.06;
    real upperHE = 0.06;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15); // PRIOR
    hidden_vote_share[1] ~ normal(election_outcome, 0.00001);
    
    hidden_vote_share[2:(discontinuity-1)] ~ 
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);
                
    hidden_vote_share[discontinuity] ~ 
        normal(hidden_vote_share[discontinuity-1]+disruption, sigma); 

    hidden_vote_share[(discontinuity+1):stability] ~ 
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);

    hidden_vote_share[(stability+1):n_days] ~ 
        normal(hidden_vote_share[stability:(n_days-1)], sigma);
    
    // -- house effects model
    pHouseEffects ~ uniform(lowerHE, upperHE); // PRIOR 

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day], 
        pseudoSampleSigma);
}

Saturday, June 8, 2019

Was YouGov the winner of the 2016-19 polling season?

I have been wondering whether the pollsters have been off the mark for some years, or whether this is something that emerged recently (say, since Morrison's appointment as Prime Minister or since Christmas 2018). Today's exploration suggests the former: the pollsters have been off the mark for a number of years in this electoral cycle.

Back in June 2017, international pollster YouGov appeared on the Australian polling scene with what looked like a fairly implausible set of poll results. The series was noisy, and well to the right of the other polling houses at the time. Back then, most pundits dismissed YouGov as a quaint curiosity.

Date                  Firm     Primary %                     TPP %
                               L/NP  ALP  GRN  ONP  OTH      L/NP  ALP
7-10 Dec 2017         YouGov   34    35   11   8    13       50    50
23-27 Nov 2017        YouGov   32    32   10   11   16       47    53
14 Nov 2017           YouGov   31    34   11   11   14       48    52
14-18 Sep 2017        YouGov   34    35   11   9    11       50    50
31 Aug - 4 Sep 2017   YouGov   34    32   12   9    13       50    50
17-21 Aug 2017        YouGov   34    33   10   10   13       51    49
20-24 Jul 2017        YouGov   36    33   10   8    13       50    50
6-11 Jul 2017         YouGov   36    33   12   7    12       52    48
22-27 Jun 2017        YouGov   33    34   12   7    14       49    51

The 2017 YouGov series was short-lived. In December 2017, YouGov acquired Galaxy, which had acquired Newspoll in May 2015. YouGov ceased publishing poll results under its own brand. Newspoll continued without noticeable change. By the time of the 2019 election, these nine YouGov polls from 2017 had been long forgotten.

Today's thought experiment: what if those nine YouGov polls were correct (on average)? I can answer this question by changing the Bayesian aggregation model so that it is centred on the YouGov polls, rather than assuming the house effects across a core set of pollsters sum to zero. Making this change yields a final poll aggregate of 51.3 per cent for the Coalition, remarkably close to the final 2019 election outcome (51.5 per cent).
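For the curious, a minimal sketch of one way to centre the aggregate on YouGov: rather than applying the usual sum-to-zero constraint over the core pollsters, pin the YouGov house effect (in effect) to zero, so that all other house effects are estimated relative to the YouGov series. The data item yougov_index below is hypothetical.

model {
    // Sketch only: yougov_index is a hypothetical data item giving YouGov's
    // position in the list of polling houses; pinning its house effect near
    // zero centres the hidden vote share on the YouGov polls
    pHouseEffects[yougov_index] ~ normal(0, 0.0001);

    // ... temporal and measurement models as in the earlier code
}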


The house effects in this model are as follows.


And if we adjust the poll results for the median house effects identified in the previous chart, we get a series like this.
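In code terms, a generated quantities block along these lines (a sketch only, not part of the models above) is one way to produce such a house-effect-adjusted series; the adjustment used in the chart is the posterior median of these quantities.

generated quantities {
    // Sketch only: each poll less its estimated house effect
    vector[n_polls] y_adjusted = y - pHouseEffects[house];
}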


YouGov is a reliable international polling house; it gets a B-grade from FiveThirtyEight. When it entered the Australian market in 2017, YouGov produced poll results that were on average up to 3 percentage points to the right of the other pollsters. The election in 2019 also produced a result that was around 3 percentage points to the right of the pollsters. That a respected international pollster could enter the Australian market in 2017 and produce this result suggests our regular Australian pollsters may have been missing the mark for quite some time.

Note: as usual, the above poll results are sourced from Wikipedia.

Sunday, June 2, 2019

More random reflections on the 2019 polls

Over the next few months, I will post some random reflections on the polls prior to the 2019 election, and what went wrong. Today's post is a look at the two-party preferred (TPP) poll results over the past 12 months (well, from 1 May 2018 to be precise). I am interested in the underlying patterns: both the periods of polling stability and the points when the polls changed.

With blue lines in the chart below, I have highlighted four periods when the polls look relatively stable. The first period is the last few months of the Turnbull premiership. The second period is Morrison's new premiership for the remainder of 2018. The third period is the first three months of 2019 (plus the first ten days of April), prior to the election being called. The fourth and final period is from the dissolution of Parliament to the election. What intrigues me is the relative polling stability during each of these periods, and the marked jumps in voting intention (often over a couple of weeks) between these periods of stability.

To provide another perspective, I have plotted in red the calendar month polling averages. For the most part, these monthly averages stay close to the four period-averages I identified.


The only step change that I can clearly explain is the change from Turnbull to Morrison (immediately preceded by the Dutton challenge to Turnbull's leadership). This step change is emblematic of one of the famous aphorisms of Australian politics: disunity is death.

It is ironic to note that the highest monthly average for the year was 48.8 per cent in July 2018 under Turnbull. It is intriguing to wonder whether the polls were as out of whack in July 2018 as they were in May 2019 (when they collectively failed to foreshadow a Coalition TPP vote share at the 2019 election in excess of 51.5 per cent). If they were, was Turnbull toppled for electability issues when he actually had around 52 per cent of the TPP vote share (48.8 per cent plus the 3.1 percentage points by which the polls were astray at the election)?

The next step change that might be partially explainable is the last one: chronologically, it is associated with the 2 April Budget, followed by the calling of the election on 11 April 2019. The Budget was a classic pre-election Budget (largesse without nasties), and calling the election focuses the mind of the electorate on the outcome. However, I do not find this explanation entirely satisfying. Budgets are very technical documents, and people usually only understand the costs and benefits when they actually experience them. Nothing in the Budget was implemented prior to the election being called.

I am at a loss to explain the step change over the Christmas/New-Year period at the end of 2018 and the start of 2019. It was clearly a summer of increasing contentment with the government.

I am also intrigued by the question of whether the polls have been consistently wrong over this one-year period, or whether the polls have increasingly deviated from population voting intention because they failed to fully capture Morrison's improving standing with the electorate over recent months.

Note: as usual I am relying on Wikipedia for the Australian opinion polling data.