
Thursday, May 23, 2019

Further analysis of poll variance

The stunning feature of the opinion polls leading up to the 2019 Federal Election is that they did not look like the statistics you would expect from independent, random-sample polls of the voting population. All sixteen polls were within one percentage point of each other. As I have indicated previously, this is much more tightly clustered than is mathematically plausible. This post explores that mathematics further.


My initial approach to testing for under-dispersion

One of the foundations of statistics is the notion that if I draw many independent random samples from a population, the means of those samples will be normally distributed around the population mean (represented by the Greek letter mu, \(\mu\)). This result is known as the Central Limit Theorem, and the distribution it describes is the sampling distribution of the sample mean. In practice, the Central Limit Theorem is usually taken to hold for samples of size 30 or more.

The span or spread of the distribution of the many sample means around the population mean will depend on the size of those samples, which is usually denoted with a lower-case \(n\). Statisticians measure this spread through the standard deviation (which is usually denoted by the Greek letter sigma \(\sigma\)). With the two-party preferred (TPP) voting data, expressed in percentage points, the standard deviation for the sample results is given by the following formula:

$$\sigma = \sqrt{\frac{proportion_{\text{Coalition TPP}} \times proportion_{\text{Labor TPP}}}{n}}$$
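
As a quick sanity check of my own (not part of the original post), we can simulate many hypothetical polls of 1000 voters drawn from a population in which 48.6 per cent support the Coalition, and compare the spread of the simulated sample percentages with the formula:

import numpy as np

rng = np.random.default_rng(19)
share, n, trials = 48.6, 1000, 100_000

# each simulated poll: n Bernoulli draws, expressed as a percentage
polls = rng.binomial(n, share / 100.0, size=trials) / n * 100.0

print('simulated sigma:  ', round(polls.std(), 3))                         # ~1.58
print('theoretical sigma:', round(np.sqrt(share * (100 - share) / n), 3))  # 1.581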

While I have the sample sizes for most of the sixteen polls prior to the 2019 Election, I do not have them for the final YouGov/Galaxy poll, nor for the Essential poll of 25–29 April 2019. For analytical purposes, I have assumed a sample of 1000 for each. The sample sizes for the sixteen polls ranged from 707 to 3008; the mean sample size was 1403.

If we take the smallest poll, with a sample of 707 voters, we can use the standard deviation to see how likely it was to have a poll result in the range 48 to 49 for the Coalition. We will need to make an adjustment, as most pollsters round their results to the nearest whole percentage point before publication.

So the question we will ask is this: if the population voting intention for the Coalition was 48.625 per cent (the arithmetic mean of the sixteen polls), what is the probability that a sample of 707 voters yields a result between 47.5 and 49.5 per cent, which would round to 48 or 49 per cent?
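
This probability can be computed directly with scipy.stats (a quick check of my own; the full code is at the end of the post):

import numpy as np
from scipy import stats

mu = 48.625                              # assumed population mean (per cent)
n = 707                                  # the smallest poll sample
sigma = np.sqrt(mu * (100 - mu) / n)     # ~1.88 percentage points

# probability the sample percentage would round to 48 or 49
p = stats.norm.cdf(49.5, mu, sigma) - stats.norm.cdf(47.5, mu, sigma)
print(round(p, 2))                       # ~0.40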


For samples of 707 voters, and assuming the population mean was 48.625, we would only expect to see a poll result of 48 or 49 around 40 per cent of the time. This is the area under the curve from 47.5 to 49.5 on the x-axis when the entire area under the curve sums to 1 (or 100 per cent).

We can compare this with the expected distribution for the largest sample of 3008 voters. Our adjustment here is slightly different, as this pollster rounds to the nearest half a percentage point. So we are interested in the area under the curve from 47.75 to 49.25 per cent.


Because the sample size (\(n\)) is larger, the spread of this distribution is narrower (compare the scale on the x-axis for both charts). We would expect almost 60 per cent of the samples to produce a result in the range 48 to 49 if the population mean (\(\mu\)) was 48.625 per cent.

We can extend this technique to all sixteen polls. For each poll, we can find the proportion of all possible samples that would generate a published result of 48 or 49. We can then multiply these probabilities together to get the probability that all sixteen polls would land in this range. Using this method, I estimate there is a one in 49,706 chance that all sixteen polls would be in the range 48 to 49 for the Coalition (if the polls were independent random samples of the population, and the population mean was 48.625 per cent).
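
Expressed as a formula, where \(p_i\) is the probability that poll \(i\) lands in the publishable range:

$$P = \prod_{i=1}^{16} p_i \approx \frac{1}{49{,}706}$$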

Chi-squared goodness of fit

Another approach is to apply a Chi-squared (\(\chi^2\)) test of goodness of fit to the sixteen polls. We can take this approach because the Central Limit Theorem tells us that the poll results should be normally distributed around the population mean. The Chi-squared test tells us whether the dispersion of the poll results is consistent with that expectation. In this case, the formula for the Chi-squared statistic is:

$$ \chi^2 = \sum_{i=1}^k {\biggl( \frac{x_i - \mu}{\sigma_i} \biggr)}^2 $$

Let's step through this equation. It is nowhere near as scary as it looks. To calculate the Chi-squared statistic, we do the following calculation for each poll:
  • First, we calculate the poll's deviation from the mean: take the published poll result (\(x_i\)) and subtract the population mean (\(\mu\)), which we estimate using the arithmetic mean of all sixteen polls. 
  • We then divide this deviation by the standard deviation for the poll (\(\sigma_i\)). 
  • Finally, we square the result (multiply it by itself), which ensures every poll contributes a positive amount. 
We then sum these (\(k=16\)) squared results, one from each poll, to give the Chi-squared statistic.
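
In code, this calculation takes only a few lines of pandas. A sketch of my own, reusing the measurements and standard_deviations series defined in the code snippet at the end of this post:

# deviation of each poll from the mean, in standard-deviation units
deviations = (measurements - measurements.mean()) / standard_deviations
chi2_stat = (deviations ** 2).sum()  # ~1.68 for these sixteen polls
print(chi2_stat)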

If the polls are normally distributed, the difference between each poll result and the population mean should, on average, be around one standard deviation in magnitude, so each squared term contributes about one to the sum. For sixteen polls normally distributed around the population mean, we would therefore expect a Chi-squared statistic of around fifteen (one for each degree of freedom, as explained below).

If the Chi-squared statistic is much less than this, the poll results could be under-dispersed. If it is much more, the poll results could be over-dispersed. For sixteen polls (which have 15 degrees of freedom, because our estimate of the population mean (\(\mu\)) is constrained by, and comes from, the 16 poll results), we would expect the Chi-squared statistic to lie between 4.6 and 32.8 some 99 per cent of the time.
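
These cut-offs can be verified with a two-line check (mine) against the Chi-squared distribution in scipy:

from scipy import stats

print(round(stats.chi2.ppf(0.005, df=15), 2))  # 4.6  (lower 0.5 per cent point)
print(round(stats.chi2.ppf(0.995, df=15), 2))  # 32.8 (upper 0.5 per cent point)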

The Chi-squared statistic I calculate for the sixteen polls is 1.68, far below the expected value of around fifteen. I can convert this 1.68 statistic to a probability for 15 degrees of freedom. When I do, I find that if the polls were truly independent and random samples (and therefore normally distributed), there would be a one in 108,282 chance of generating a distribution of poll results as narrow as the one we saw prior to the 2019 Federal Election. We can confidently say the published polls were under-dispersed.
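
The conversion from Chi-squared statistic to probability is a one-liner with scipy:

from scipy import stats

print(stats.chi2.cdf(1.68, df=15))  # ~9.2e-06, or about one chance in 108,000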


Note: In the language of statistics, our null hypothesis (\(H_0\)) is that the sixteen poll results are normally distributed around the population mean. If the null hypothesis were correct, we would expect the Chi-squared statistic to be between 4.6 and 32.8 (99 per cent of the time). As our Chi-squared statistic falls outside this range, we reject the null hypothesis in favour of the alternative hypothesis (\(H_a\)): that collectively, the poll results are not normally distributed.

Why the difference?

It is interesting to speculate on why these two approaches differ. While both suggest the poll results were statistically unlikely, the Chi-squared test puts them at roughly twice as unlikely as the first approach. I suspect the answer lies in the rounding pollsters apply to their raw results before publication, which affects the distribution of the published poll results. The first approach modelled that rounding; the Chi-squared test did not.

So what went wrong?

There are really two questions here:
  • Why were the polls under-dispersed; and
  • On the day, why did the election result differ from the sixteen prior poll estimates?

To be honest, it is too early to tell with any certainty on either question. But we are starting to see statements from the pollsters that suggest where some of the problems may lie.

A first issue seems to be the increased use of online polling. There are a few concerns here:
  • Finding a random sample where all Australians have an equal chance of being polled - there have been suggestions that online samples include too many educated and politically engaged people.
  • Resampling the same individuals from time to time - meaning the samples are not independent (this may explain the lack of noise we have seen in the polls in recent years). If your sample pool is unrepresentative, and it is drawn on often, then all of your poll results will be skewed in the same direction.
  • An over-reliance on clever analytics and weights to try to make a pool of online respondents look like the broader population. These weights are challenging to keep accurate and reliable over time.
More generally, regardless of the polling methodology:
  • the use of weighting, where some groups are under-represented in the raw sample, can magnify sampling errors (the sketch below illustrates this); and
  • not having quotas and weights for all the factors that correlate with political differences can mean polls inadvertently under-sample important constituencies.
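
To see how weighting magnifies error, a useful yardstick is Kish's effective sample size, which shrinks as the weights become more uneven. A small illustration of my own, with purely hypothetical weights:

import numpy as np

# hypothetical weights: 1200 respondents at weight 1.0, plus 200
# respondents up-weighted to 2.5 to repair under-representation
weights = np.concatenate([np.full(1200, 1.0), np.full(200, 2.5)])

# Kish's effective sample size: (sum of weights)^2 / sum of squared weights
n_eff = weights.sum() ** 2 / (weights ** 2).sum()
print(round(n_eff))  # ~1180: the 1400-person poll behaves like one of ~1180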

Like Kevin Bonham, I am not a fan of the following theories:
  • Shy Tory voters - too embarrassed to tell pollsters of their secret intention to vote for the Coalition.
  • A late swing after the last poll.

Code snippet

To be transparent about how I approached this analysis, the Python code snippet follows.
import pandas as pd 
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

import sys
sys.path.append( '../bin' )
plt.style.use('../bin/markgraph.mplstyle')

# --- Raw data
sample_sizes = (
    pd.Series([3008, 1000, 1842, 1201, 1265, 1644, 1079, 826, 
        2003, 1207, 1000, 826, 2136, 1012, 707, 1697]))
measurements = ( # for Labor:
    pd.Series([51.5, 51,   51,   51.5, 52,   51,   52,   51,  
        51,   52,   51,   51,  51,   52,   51,  52]))
roundings =   (
    pd.Series([0.25, 0.5,  0.5,  0.25, 0.5,  0.5,  0.5,  0.5, 
        0.5,  0.5,  0.5,  0.5, 0.5,  0.5,  0.5, 0.5]))

# some pre-processing
Mean_Labor = measurements.mean()
Mean_Coalition = 100 - Mean_Labor
variances = (measurements * (100-measurements)) / sample_sizes 
standard_deviations = pd.Series(np.sqrt(variances)) # sigma

print('Mean measurement: ', Mean_Labor)
print('Measurement counts:\n', measurements.value_counts())
print('Sample size range from/to: ', sample_sizes.min(), 
    sample_sizes.max())
print('Mean sample size: ', sample_sizes.mean())


# --- Using normal distributions
print('-----------------------------------------------------------')
individual_probs = []
for sd, r in zip(standard_deviations.tolist(), roundings):
    individual_probs.append(stats.norm(Mean_Coalition, sd).cdf(49.0 + r) - 
        stats.norm(Mean_Coalition, sd).cdf(48.0 - r))

# print individual probabilities for each poll
print('Individual probabilities: ', individual_probs)

# product of all probabilities to calculate combined probability
probability = pd.Series(individual_probs).product()
print('Overall probability: ', probability)
print('1/Probability: ', 1/probability)


# --- Chi Squared - check normally distributed - two tailed test
print('-----------------------------------------------------------')
dof = len(measurements) - 1  # degrees of freedom
print('Degrees of freedom: ', dof)
X = pow((measurements - Mean_Labor)/standard_deviations, 2).sum()
X_min = stats.distributions.chi2.ppf(0.005, df=dof)
X_max = stats.distributions.chi2.ppf(0.995, df=dof)
print('Expected X^2 between: ', round(X_min, 2), ' and ', round(X_max, 2))
print('X^2 statistic: ', X)
X_probability = stats.chi2.cdf(X, dof)
print('Probability: ', X_probability)
print('1/Probability: ', 1 / X_probability)


# --- Chi-squared plot
print('-----------------------------------------------------------')
x = np.linspace(0, X_min + X_max, 250)
y = pd.Series(stats.chi2(dof).pdf(x), index=x)

ax = y.plot()
ax.set_title(r'$\chi^2$ Distribution: degrees of freedom='+str(dof))
ax.axvline(X_min, color='royalblue')
ax.axvline(X_max, color='royalblue')
ax.axvline(X, color='orange')
ax.text(x=(X_min+X_max)/2, y=0.00, s='99% between '+str(round(X_min, 2))+
    ' and '+str(round(X_max, 2)), ha='center', va='bottom')
ax.text(x=X, y=0.01, s=r'$\chi^2 = '+str(round(X, 2))+'$', 
    ha='right', va='bottom', rotation=90)

ax.set_xlabel(r'$\chi^2$')
ax.set_ylabel('Probability') 

fig = ax.figure
fig.set_size_inches(8, 4)
fig.tight_layout(pad=1)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
        ha='right', va='bottom', fontsize='x-small', 
        fontstyle='italic', color='#999999') 
fig.savefig('./Graphs/Chi-squared.png', dpi=125) 
plt.close()


# --- some normal plots
print('-----------------------------------------------------------')
mu = Mean_Coalition

n = 707
low = 47.5
high = 49.5

sigma = np.sqrt((Mean_Labor * Mean_Coalition) / n)
x = np.linspace(mu - 4*sigma, mu + 4*sigma, 200)
y = pd.Series(stats.norm.pdf(x, mu, sigma), index=x)

ax = y.plot()
ax.set_title('Distribution of samples: n='+str(n)+', μ='+
    str(mu)+', σ='+str(round(sigma,2)))
ax.axvline(low, color='royalblue')
ax.axvline(high, color='royalblue')
ax.text(x=low-0.5, y=0.05, s=str(round(stats.norm.cdf(low, 
    loc=mu, scale=sigma)*100.0,1))+'%', ha='right', va='center')
ax.text(x=high+0.5, y=0.05, s=str(round((1-stats.norm.cdf(high, 
    loc=mu, scale=sigma))*100.0,1))+'%', ha='left', va='center')
mid = str( round(( stats.norm.cdf(high, loc=mu, scale=sigma) - 
    stats.norm.cdf(low, loc=mu, scale=sigma) )*100.0, 1) )+'%'
ax.text(x=48.5, y=0.05, s=mid, ha='center', va='center')

ax.set_xlabel('Per cent')
ax.set_ylabel('Probability') 

fig = ax.figure
fig.set_size_inches(8, 4)
fig.tight_layout(pad=1)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
        ha='right', va='bottom', fontsize='x-small', 
        fontstyle='italic', color='#999999') 
fig.savefig('./Graphs/'+str(n)+'.png', dpi=125) 
plt.close()

# ---
n = 3008
low = 47.75
high = 49.25

sigma = np.sqrt((Mean_Labor * Mean_Coalition) / n)
x = np.linspace(mu - 4*sigma, mu + 4*sigma, 200)
y = pd.Series(stats.norm.pdf(x, mu, sigma), index=x)

ax = y.plot()
ax.set_title('Distribution of samples: n='+str(n)+', μ='+
    str(mu)+', σ='+str(round(sigma,2)))
ax.axvline(low, color='royalblue')
ax.axvline(high, color='royalblue')
ax.text(x=low-0.25, y=0.3, s=str(round(stats.norm.cdf(low, 
    loc=mu, scale=sigma)*100.0,1))+'%', ha='right', va='center')
ax.text(x=high+0.25, y=0.3, s=str(round((1-stats.norm.cdf(high, 
    loc=mu, scale=sigma))*100.0,1))+'%', ha='left', va='center')
mid = str( round(( stats.norm.cdf(high, loc=mu, scale=sigma) - 
    stats.norm.cdf(low, loc=mu, scale=sigma) )*100.0, 1) )+'%'
ax.text(x=48.5, y=0.3, s=mid, ha='center', va='center')

ax.set_xlabel('Per cent')
ax.set_ylabel('Probability') 

fig = ax.figure
fig.set_size_inches(8, 4)
fig.tight_layout(pad=1)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
        ha='right', va='bottom', fontsize='x-small', 
        fontstyle='italic', color='#999999') 
fig.savefig('./Graphs/'+str(n)+'.png', dpi=125) 
plt.close()

Sunday, May 19, 2019

A polling failure and a betting failure

Well, that went bad for the pollsters. Every poll published during the election campaign got it wrong. Collectively the polls suggested Labor would win around 51.5 per cent of the two-party preferred vote; at this stage in the count, it looks more like 49 per cent for Labor to the Coalition's 51 per cent.

I am as surprised as most. While it was obvious that the pollsters were doing something that reduced polling noise (and hopefully increased the polling signal), I assumed they knew what they were doing. What I really wanted was for the pollsters to tell us (the consumers of their information) how it was made: because it ain't what it says on the tin.

The 16 published polls since the commencement of the election campaign did not have the numerical features a statistician would expect from independent, representative and randomly sampled opinion polls. They did not look normally distributed around a population mean (even one that may have been moving over time). In short, the polls were under-dispersed.

I was troubled by the under-dispersion in the polls (here, here, and here), and I knew this could increase the risk of a polling failure. But I was not expecting a failure of this magnitude. Consistent with the polls, I thought the most likely outcome was a Labor victory in the order of 80 seats (plus or minus a few), with the Coalition picking up around 65, and others landing around 6 seats (80-65-6). The final result could end up closer to 68-77-6. While a polling failure was possible, perhaps even 30 per cent likely, I did not think it the most likely outcome. Let's chalk it up to living in the Canberra bubble and confirmation bias.

I was also a little annoyed. The Bayesian aggregation technique I use makes the most use of the data at either end of the normal distribution around the population mean. Yet this data was implausibly missing from the public record. You don't need an aggregator when every poll result is in the range 48-49 to 51-52; there is nothing in those results needing clarification.

Because I assumed the pollsters were smoothing their own polls, I wondered what raw results they were actually seeing. Compared with February and March (Coalition on 47 per cent in round terms), the collective April and May poll results were substantially different (48.5 per cent). It is almost as if the public's mood shifted one and a half percentage points overnight with the 2 April Morrison Budget (and I am a long-standing sceptic about the capacity for Budgets to shift public opinion). To smooth so quickly to a substantially different number seemed unusual and analytically complicated. I wondered a number of times whether the pollsters had seen a 50 or a 51 or even a 52 for the Coalition in their raw data before smoothing (indeed, thinking about the missing inliers and outliers was how I came to be troubled by the polls).

What next: Something has to change. Like the United Kingdom, which had a similar-scale polling failure at its 2015 general election, we need an inquiry into what went wrong. We also need way more transparency. Pollsters need to explain their methodology better and publish more about the pre-publication processing they undertake.

At least the myth of bookmakers knowing best has been put to bed. The bookmakers had a bad day too: especially Sportsbet, which had paid out early on a Labor win.

Postscript

Thanks to the Poll Bludger for the recognition. And some further reflections at Poll Bludger.

It is nice to see that my questioning of the under-dispersion in the polls means I am now labelled a hardcore psephologist (albeit before the election).

The postmortem at freerangestats.info is worth reading.

A great election postmortem by Kevin Bonham.

The mathematics does not lie: why polling got the Australian election wrong, by Brian Schmidt.

Saturday, May 18, 2019

Pre-polling in 2019

We have had a substantial pre-poll turn-out at the 2019 election.


The code snippet for this chart follows.

import numpy as np 
import pandas as pd 
import matplotlib.pyplot as plt
import sys
sys.path.append( '../bin' )
plt.style.use('../bin/markgraph.mplstyle') 

# --- downloaded data files from the AEC
pp2010 = './Data/e2010-prepoll-stats-19-08.csv'
pp2013 = './Data/e2013-prepoll-stats-07-09.csv'
pp2016 = './Data/20160702_WEB_Prepoll_Report.csv'
pp2019 = './Data/20190518_WEB_Pre-poll_Report_FE2019.csv'

# --- build a comparative table from the AEC files
elections = ['2010-08-21', '2013-09-07', '2016-07-02', '2019-05-18']
years = [e[0:4] for e in elections]
files = [pp2010, pp2013, pp2016, pp2019]  # one AEC file per election

for y, f, e in zip(years, files, elections):
    print(y)
    df = pd.read_csv(f)
    
    # - delete n initial columns - sum to daily totals - calculate index
    if y == '2010':
        n = 2
    elif y in ['2013', '2016']:
        n = 3
    elif y == '2019':
        n = 4
    s = df.drop(labels=df.columns[0:n], axis=1).sum()
    s.index = pd.to_datetime(arg=s.index.values, dayfirst=True)
    s.index = s.index - np.datetime64(e, 'D')
    
    # - build up the comparative table for each election
    if y == '2010':
        table = pd.DataFrame([s], index=[y]).T
    else:
        table = table.reindex(index=pd.Index.union(table.index, s.index))
        table[y] = s

# --- tidy up - present as cumsum - metric is millions - make index an int
table = (table / 1_000_000).fillna(0).cumsum()
table.index = (table.index.values / np.timedelta64(1, 'D')).astype(int) 

# --- plot the comparative table
ax = table.plot()
ax.set_title('Cumulative Pre-Poll Numbers')
ax.set_xlabel('Days prior to the Election')
ax.set_ylabel('Millions pre-polled') 
fig = ax.figure
fig.set_size_inches(8, 4)
fig.tight_layout(pad=1)
fig.text(0.99, 0.0025, 'marktheballot.blogspot.com.au',
        ha='right', va='bottom', fontsize='x-small', 
        fontstyle='italic', color='#999999') 
fig.savefig('./Graphs/Pre-poll.png', dpi=125) 
plt.close()

A last fleeting look at the betting markets

At 8.10am on the morning of the election, the bookmakers have Labor as the clear favourite.

Date         House      Coalition Odds ($)   Labor Odds ($)   Coalition Win Probability (%)
2019-05-18   BetEasy    6.00                 1.13             15.85
2019-05-18   Ladbrokes  7.00                 1.10             13.58
2019-05-18   Sportsbet  5.75                 1.14             16.55
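
The Coalition win probability in the final column is just the two inverse odds normalised to remove the bookmaker's over-round (my restatement of the arithmetic):

def coalition_win_probability(coalition_odds, labor_odds):
    # raw probabilities implied by the decimal odds
    raw_coalition = 1.0 / coalition_odds
    raw_labor = 1.0 / labor_odds
    # normalise so the two probabilities sum to 100 per cent
    return raw_coalition / (raw_coalition + raw_labor) * 100.0

print(round(coalition_win_probability(6.00, 1.13), 2))  # 15.85 (BetEasy)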


Turning to the individual seat markets at Sportsbet, the summary charts follow. These are consistent with a TPP vote for Labor around 51.5 per cent.



Based on the individual seat odds, the implied probability of a majority government is 96.5 per cent. The Coalition has no chance of forming a majority government.



The 151 time-series charts for each seat follow. Because the bookmakers have such humongous over-rounds on their individual seat odds, I take a fairly savage approach with the long-shot odds (a code sketch of these rules follows the list).

  • For odds between $1.01 and $1.02 (bookmaker raw probabilities between 98 and 99 per cent), I have treated the seat as having a probability of 100 per cent for the favourite.  
  • For odds between $1.03 and $1.50 (bookmaker raw probabilities between 66.7 and 97 per cent), I have done all of the normalisation for the bookmaker's over-round in terms of the non-favourite parties.  
  • For odds in excess of $9.99, I have simply ignored them. 

Note: I have set the odds for Katter in Kennedy to $1.05, as this betting option was suspended.
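
A minimal sketch of these rules (my reconstruction for illustration only; the helper name and the proportional fallback for favourites priced above $1.50 are my assumptions, not the code behind the charts):

def seat_probabilities(odds):
    """Convert a dict of party -> decimal odds to win probabilities."""
    favourite = min(odds, key=odds.get)
    # ignore long shots over $9.99, but always keep the favourite
    probs = {p: 1.0 / o for p, o in odds.items()
             if o <= 9.99 or p == favourite}
    if odds[favourite] <= 1.02:
        # treat very short-priced favourites as certainties
        return {p: 1.0 if p == favourite else 0.0 for p in probs}
    if odds[favourite] <= 1.50:
        # remove the over-round entirely from the non-favourites
        overround = sum(probs.values()) - 1.0
        others = sum(v for p, v in probs.items() if p != favourite)
        return {p: v if p == favourite else v * (others - overround) / others
                for p, v in probs.items()}
    # otherwise, normalise all parties proportionally (my assumption)
    total = sum(probs.values())
    return {p: v / total for p, v in probs.items()}

print(seat_probabilities({'Labor': 1.38, 'Coalition': 2.75, 'Greens': 21}))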

[151 seat-by-seat time-series charts]

Note: betting on Katter in Kennedy was suspended. Estimate based on Labor odds.

The key odds that inform the above charts follow.


Labor Coalition Liberal (Coalition) National (Coalition) Greens Independent Centre Alliance Katter's Australian Party Shooters, Fishers and Farmers
Adelaide (SA) 1.08 6

31



Aston (VIC) 4.4 1.2

41



Ballarat (VIC) 1.01 11

34 21


Banks (NSW) 2.9 1.35

16



Barker (SA) 12 1.03

51
9

Barton (NSW) 1.01 11

21



Bass (TAS) 1.38 2.75

21



Bean (ACT) 1.03 10

31 31


Bendigo (VIC) 1.04 8.5

41



Bennelong (NSW) 8 1.05

21



Berowra (NSW) 11 1.01

21 31


Blair (QLD) 1.1 10






Blaxland (NSW) 1.01 11

61



Bonner (QLD) 3.2 1.3

61



Boothby (SA) 3 1.3

21 26


Bowman (QLD) 4.75 1.15

31



Braddon (TAS) 1.9 1.8

61 10


Bradfield (NSW) 11 1.01

41



Brand (WA) 1.08 7

31



Brisbane (QLD) 3.2 1.35

8



Bruce (VIC) 1.01 11

51



Burt (WA) 1.01 11

21 26

9.5
Calare (NSW) 12 1.1

31


5.5
Calwell (VIC) 1.01 11

26



Canberra (ACT) 1.01 12

11 26


Canning (WA) 5.5 1.1

26



Capricornia (QLD) 1.85 1.85


26


Casey (VIC) 2.85 1.4

36 21


Chifley (NSW) 1.01 11

21 31


Chisholm (VIC) 1.35 3


21


Clark (TAS) 11 21

31 1.01


Cook (NSW) 11 1.01

34



Cooper (VIC) 1.14 31

4.5 41


Corangamite (VIC) 1.32 3.1

26 46


Corio (VIC) 1.01 11

34



Cowan (WA) 1.16 4.5

31



Cowper (NSW) 21 1.93

51 1.72


Cunningham (NSW) 1.01 11

16



Curtin (WA) 14 1.1

31 5.5


Dawson (QLD) 2.2 1.6

51

8
Deakin (VIC) 2 1.77

31 31


Dickson (QLD) 1.55 2.35

31



Dobell (NSW) 1.06 7.5

21 31


Dunkley (VIC) 1.1 5.75

26



Durack (WA) 7.5 1.06

31



Eden-Monaro (NSW) 1.05 8

21 12


Fadden (QLD) 8 1.05

31



Fairfax (QLD) 11 1.01

21 21


Farrer (NSW) 31 1.85


1.8


Fenner (ACT) 1.01 11

41



Fisher (QLD) 6.5 1.08

34



Flinders (VIC) 2.4 1.5

31 16


Flynn (QLD) 2.2 1.6

21 18


Forde (QLD) 1.35 3

51



Forrest (WA) 11 1.01

21 31

26
Fowler (NSW) 1.01 12

14



Franklin (TAS) 1.01 11

31



Fraser (VIC) 1.01 11

41 21


Fremantle (WA) 1.01 11

21



Gellibrand (VIC) 1.01 61

11



Gilmore (NSW) 1.3
3 12 31



Gippsland (VIC) 21 1.01

51 31

11
Goldstein (VIC) 9 1.03

51 24


Gorton (VIC) 1.01 26

11 21


Grayndler (NSW) 1.01 51

11



Greenway (NSW) 1.01 11

16



Grey (SA) 5.5 1.14

21 26 10

Griffith (QLD) 1.1 21

6



Groom (QLD) 11 1.01






Hasluck (WA) 1.33 2.9

21


31
Herbert (QLD) 3.3 1.26

36

26
Higgins (VIC) 3.5 1.72

2.5



Hindmarsh(SA) 1.01 11

31



Hinkler (QLD) 7.5 1.07


18


Holt (VIC) 1.01 11

34



Hotham (VIC) 1.01 11

21



Hughes (NSW) 4.3 1.18

16 31


Hume (NSW) 6 1.12

21 31


Hunter (NSW) 1.01 11

21



Indi (VIC) 16 1.6

12 2.2


Isaacs (VIC) 1.01 11

31



Jagajaga (VIC) 1.02 10

26



Kennedy (QLD) 10 8

41

1.05
Kingsford Smith (NSW) 1.01 11

31



Kingston (SA) 1.01 11

26



Kooyong (VIC) 12 1.18

4 21


La Trobe (VIC) 1.33 3

34



Lalor (VIC) 1.01 11

41



Leichhardt (QLD) 3.1 1.32

31

6.5
Lilley (QLD) 1.01 11

21



Lindsay (NSW) 2.02 1.64

21



Lingiari (NT) 1.28 3.2

26 36


Longman (QLD) 1.22 3.5

41



Lyne (NSW) 11 1.01


21


Lyons (TAS) 1.02 9

32



Macarthur (NSW) 1.01 11

31



Mackellar (NSW) 26 1.01

21 11


Macnamara (VIC) 1.5 12

2.55 51


Macquarie (NSW) 1.05 8

26



Makin (SA) 1.01 11

41



Mallee (VIC) 12 1.2

51 4

10
Maranoa (QLD) 21 1.1




10
Maribyrnong (VIC) 1.01 11

61



Mayo (SA) 31 6.5

41
1.08

McEwen (VIC) 1.01 11

41 21


McMahon (NSW) 1.01 11

21



McPherson (QLD) 11 1.01

21 21


Melbourne (VIC) 11 16

1.01 51


Menzies (VIC) 5 1.14

41 16


Mitchell (NSW) 11 1.01

21



Monash (VIC) 3.5 1.29

26 21


Moncrieff (QLD) 11 1.01

31



Moore (WA) 6 1.1

21 31


Moreton (QLD) 1.04 8.5

31



New England (NSW) 11 1.01

61 16


Newcastle (NSW) 1.01 12

21



Nicholls (VIC) 11 1.01

61 26


North Sydney (NSW) 11 1.01

18 31


O'Connor (WA) 6.5 1.1

26



Oxley (QLD) 1.01 11

18



Page (NSW) 2.3 1.5

31 12


Parkes (NSW) 5.5 1.1

12 31


Parramatta (NSW) 1.07 4.5

41



Paterson (NSW) 1.01 11

41



Pearce (WA) 2.4 1.5

21 26

15
Perth (WA) 1.07 7

26



Petrie (QLD) 1.66 2

36



Rankin (QLD) 1.01 11

21



Reid (NSW) 1.8 1.9

31



Richmond (NSW) 1.1 12

6 51


Riverina (NSW) 11 1.01

41



Robertson (NSW) 1.38 2.8

31 41


Ryan (QLD) 4 1.18

7



Scullin (VIC) 1.01 11

41 31


Shortland (NSW) 1.01 11

31



Solomon (NT) 1.35 2.85

31 26


Spence (SA) 1.01 11

21 41


Stirling (WA) 1.72 2

31



Sturt (SA) 6.5 1.1

31 26


Swan (WA) 1.55 2.35

26



Sydney (NSW) 1.01 16

11



Tangney (WA) 11 1.01

26 31


Wannon (VIC) 8 1.05

21 31


Warringah (NSW) 51 2.65

41 1.4


Watson (NSW) 1.01 11

21



Wentworth (NSW) 41 1.22

21 4


Werriwa (NSW) 1.01 12

31



Whitlam (NSW) 1.01 12






Wide Bay(QLD) 5.5 1.11


26


Wills (VIC) 1.17 51

4.5



Wright (QLD) 6.5 1.08




26