The first thing to note is that the median lines in the left-anchored and right-anchored models are very similar. It is essentially the same line, shifted up or down by 1.4 percentage points. As we have discussed previously, this difference of 1.4 percentage points is effectively a drift in the collective polling house effects over the period from 2016 to 2019. The polls opened after the 2016 election with a collective pro-Labor bias of 1.7 percentage points. This bias grew by a further 1.4 percentage points, reaching 3.1 percentage points at the time of the 2019 election (the gap between the yellow line and the blue/green lines on the right-hand side of the last chart above).
The third model, the left-and-right anchored model, forces this drift to be reconciled within the model (but without any guidance on how to do so). This model explicitly assumes there is no such drift (i.e. house effects are constant and unchanging). In accommodating the unspecified drift, the left-and-right anchored model places much of the adjustment close to the two anchor points at the left and right extremes of the chart. The shape of the middle of the chart is not dissimilar to that of the singly anchored charts.
While this is the output for the left-and-right anchored model, I would advise caution before assuming that the drift in polling house effects actually occurred in the periods immediately after the 2016 election and immediately before the 2019 election. This is simply the best mathematical fit for a model that assumes there was no drift. The actual drift could have happened slowly over the entire period, or quickly at the beginning, somewhere in the middle, or towards the end of the three-year period.
My results for the left-and-right anchored model are not dissimilar to those of Jackman and Mansillo. The differences between our charts are largely a result of how I treat the day-to-day variance in voting intention (particularly following the polling discontinuity associated with the leadership transition from Turnbull to Morrison). I chose to specify this variance, rather than model it with a hyper-prior. I specified this parameter because: (a) we can observe higher volatility immediately following discontinuity events; and (b) the sparsity of polling in Australia, especially over the 2016-19 period, leads this model to under-estimate the variance.
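For readers who want to see the contrast, the following is a minimal sketch of the alternative approach: estimating the day-to-day standard deviation from the data with a half-normal hyper-prior, rather than fixing it in the transformed data block. It is illustrative only; the discontinuity and house-effect machinery has been stripped out, and the prior scale (0.01) is my assumption, not Jackman and Mansillo's specification.

// Sketch only: day-to-day standard deviation estimated from the data
// rather than fixed (assumed prior scale; simplified random-walk model)
data {
    int n_polls;
    int n_days;
    real pseudoSampleSigma;     // assumed standard deviation for all polls
    vector[n_polls] y;          // TPP vote share
    int day[n_polls];
}

parameters {
    vector[n_days] hidden_vote_share;
    real<lower=0> sigma;        // day-to-day standard deviation, now a parameter
}

model {
    sigma ~ normal(0, 0.01);    // half-normal hyper-prior (assumed scale)
    hidden_vote_share[1] ~ normal(0.5, 0.15);                                      // diffuse PRIOR
    hidden_vote_share[2:n_days] ~ normal(hidden_vote_share[1:(n_days-1)], sigma);  // random walk
    y ~ normal(hidden_vote_share[day], pseudoSampleSigma);                          // measurement model
}

With sparse polling, a model like this tends to settle on a very small sigma, which is the under-estimation problem noted above.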
All three models produce a very similar result for the discontinuity event itself: an impact of just under three percentage points. Note that these charts are expressed in vote shares, not percentage points.
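(As an aside, if you would rather read the disruption estimate in percentage points, a generated quantities block could be appended to any of the three Stan programs listed below. This fragment is a hypothetical addition, not part of the programs as published.)

generated quantities {
    // report the discontinuity impact in percentage points rather than vote share
    real disruption_pp = disruption * 100;
}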
And just to complete the analysis, let's look at the house effects. I would urge caution with all of these house effects: they are an artefact of the best fit in models that do not allow for the 1.4 percentage point drift in collective house effects that occurred between 2016 and 2019.
The three models are almost identical.
// STAN: Two-Party Preferred (TPP) Vote Intention Model
// - Fixed starting-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y;          // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real start_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15);                           // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit);   // ANCHOR
    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);
    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1] + disruption, sigma);
    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);
    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE);                // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day], pseudoSampleSigma);
}
// STAN: Two-Party Preferred (TPP) Vote Intention Model
// - Fixed end-point only

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y;          // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor point
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15);                           // PRIOR
    hidden_vote_share[1] ~ normal(0.5, 0.15);                 // PRIOR
    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);
    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1] + disruption, sigma);
    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);
    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE);                // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day], pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); // ANCHOR
}
// STAN: Two-Party Preferred (TPP) Vote Intention Model
// - Fixed starting-point and end-point

data {
    // data size
    int n_polls;
    int n_days;
    int n_houses;

    // assumed standard deviation for all polls
    real pseudoSampleSigma;

    // poll data
    vector[n_polls] y;          // TPP vote share
    int house[n_polls];
    int day[n_polls];

    // period of discontinuity event
    int discontinuity;
    int stability;

    // election outcome anchor points
    real start_anchor;
    real end_anchor;
}

transformed data {
    // fixed day-to-day standard deviation
    real sigma = 0.0015;
    real sigma_volatile = 0.0045;

    // house effect range
    real lowerHE = -0.07;
    real upperHE = 0.07;

    // tightness of anchor points
    real tight_fit = 0.0001;
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects;
    real disruption;
}

model {
    // -- temporal model [this is the hidden state-space model]
    disruption ~ normal(0.0, 0.15);                           // PRIOR
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit);   // ANCHOR
    hidden_vote_share[2:(discontinuity-1)] ~
        normal(hidden_vote_share[1:(discontinuity-2)], sigma);
    hidden_vote_share[discontinuity] ~
        normal(hidden_vote_share[discontinuity-1] + disruption, sigma);
    hidden_vote_share[(discontinuity+1):stability] ~
        normal(hidden_vote_share[discontinuity:(stability-1)], sigma_volatile);
    hidden_vote_share[(stability+1):n_days] ~
        normal(hidden_vote_share[stability:(n_days-1)], sigma);

    // -- house effects model - uniform distributions
    pHouseEffects ~ uniform(lowerHE, upperHE);                // PRIOR

    // -- observed data / measurement model
    y ~ normal(pHouseEffects[house] + hidden_vote_share[day], pseudoSampleSigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit); // ANCHOR
}
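None of the three programs above lets the collective house effects move over the term, which is why the 1.4 percentage point drift has to be absorbed elsewhere in the fit. As a thought experiment only, the next sketch shows one way the assumption could be relaxed: give each house a start-of-term and an end-of-term effect, and blend them linearly by day. This is not the model behind the charts above; the normal(0, 0.02) priors on the house effects and the linear interpolation are my assumptions, and the discontinuity handling has been stripped out to keep the sketch short.

// Sketch only: house effects allowed to drift linearly over the term
// (assumed priors; discontinuity handling omitted)
data {
    int n_polls;
    int n_days;
    int n_houses;
    real pseudoSampleSigma;     // assumed standard deviation for all polls
    vector[n_polls] y;          // TPP vote share
    int house[n_polls];
    int day[n_polls];
    real start_anchor;          // election outcome anchor points
    real end_anchor;
}

transformed data {
    real sigma = 0.0015;        // fixed day-to-day standard deviation
    real tight_fit = 0.0001;    // tightness of anchor points
}

parameters {
    vector[n_days] hidden_vote_share;
    vector[n_houses] pHouseEffects_start;   // house effects at the start of the term
    vector[n_houses] pHouseEffects_end;     // house effects at the end of the term
}

model {
    // priors on both sets of house effects (assumed scale)
    pHouseEffects_start ~ normal(0, 0.02);
    pHouseEffects_end ~ normal(0, 0.02);

    // temporal model, anchored at both ends
    hidden_vote_share[1] ~ normal(start_anchor, tight_fit);                        // ANCHOR
    hidden_vote_share[2:n_days] ~ normal(hidden_vote_share[1:(n_days-1)], sigma);
    end_anchor ~ normal(hidden_vote_share[n_days], tight_fit);                     // ANCHOR

    // measurement model: each poll's house effect drifts linearly
    // from its start-of-term value to its end-of-term value
    for (p in 1:n_polls) {
        real w = (day[p] - 1.0) / (n_days - 1.0);   // 0 on day 1, 1 on day n_days
        y[p] ~ normal((1 - w) * pHouseEffects_start[house[p]]
                      + w * pHouseEffects_end[house[p]]
                      + hidden_vote_share[day[p]], pseudoSampleSigma);
    }
}

In practice, a model like this would struggle to separate a slow drift in house effects from genuine movement in voting intention, which is essentially the identification problem discussed above.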
Update: Kevin Bonham is also exploring what public voting intention might have looked like during the 2016-19 period.