Soapbox Column: Poll Position

Poll position

Matthew Thomas

Soapbox 0042, 1999-07-04

Well, it's happened again.

What's happened? Oh, just another political opinion poll announced. Just another occasion when a news agency, apparently lost for real news to report, decides to tell us what they think we think.

This poll in particular was carried out by UMR-Insight for the National Business Review last Friday. It reported that Labour was on 44 percent support, National on 33 percent, the Alliance at 7 percent, and ACT on 5 percent. Predictably, there was joy in Labour ranks and nervousness amongst National supporters.

Personally, I wasn't so much interested in the results as in the existence of the poll itself. Because I find the whole idea of opinion polls deeply disturbing.

On a superficial level, I am disturbed that the two main TV networks consider poll results so important that they regularly place them at the top of news bulletins. Surely there's something basically wrong with that -- that a statistical survey on a hypothetical question (`if an election was held today ...') should be seen as more important than things which are actually happening in the world?

On a deeper level, I am tempted to describe opinion polls as anti-democratic. Anti-democratic because they are frequently wrong, and yet their results guide the decisions of voters to an alarming degree.


Take the last couple of weeks, for example. The National Business Review poll was in rough agreement with a TV3 poll carried out two weeks ago, which had Labour on 46 percent and National at 33 percent. But a TV1 poll, carried out at the same time as the TV3 poll, had Labour on 42 and National on 41.

Something is seriously wrong here.

Of course, opinion polls are allowed to be inaccurate: that's why they have that margin-of-error thingy, right? Well, not quite. Unfortunately, explaining why not is a rather dull and dreary affair, but here goes, anyway.

The problem with a margin of error is that it makes the public (and often even the news media) think that a poll is more accurate than it actually is. Let's take National's 41 percent from the TV1 poll as an example. TV1 reported that their poll had a margin of error of 3.5 percent. So National's true support must be between 37.5 percent (41 - 3.5) and 44.5 percent (41 + 3.5), right? Wrong. And wrong for three reasons.

Firstly, the margin of error is derived from the number of people polled (1000 in the TV1 case). And as long as the sample size is minuscule in comparison with the size of the population (which, in this case, is the total number of enrolled voters), there is always going to be a possibility that the actual proportions of support for each party are vastly different from what your poll suggests.

For this reason, a pollster can only say that they are confident that the difference between the poll result and the actual support level in the population is less than the margin of error. Just how confident the pollster is gets measured by something called (surprisingly) the confidence level. For most opinion polls, this confidence level is 95 percent. That is, in 95 percent of polls, the difference between the actual support for a party and the support reported by the poll will be within the margin of error.
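To make that concrete: the textbook formula for the margin of error of a polled proportion p, from a sample of n voters at a 95 percent confidence level, is 1.96 times the square root of p(1-p)/n. Here is a rough sketch in Python -- I'm assuming the pollsters use this standard formula, evaluated at the worst case of 50 percent support; TV1's own method for arriving at 3.5 percent is their own affair:

    import math

    def margin_of_error(p, n, z=1.96):
        # Standard error of a sample proportion, scaled by the
        # z-value for a 95 percent confidence level (z = 1.96).
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) for a sample of 1000 voters:
    print(margin_of_error(0.5, 1000))  # prints about 0.031, i.e. 3.1 points

That gives roughly 3.1 percentage points either way -- the same ballpark as TV1's quoted 3.5 percent.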

But actually, that's not quite correct either. Which is where the other two reasons come in.

The second reason it is dangerous to rely on the margin of error is that it only applies to a hypothetical situation -- a party (or politician) getting exactly 50 percent support. The reason for this can be seen from a simple example, that of New Zealand First. The TV1 poll shows New Zealand First on 1.9 percent; and the margin of error, as I said, is 3.5 percent, with a confidence level of 95 percent. Does that mean that somewhere in that 95-percent chunk of confidence, there is a chance that support for New Zealand First is negative 1.6 percent (1.9% - 3.5%)? Of course not.

At New Zealand First's level of support, the margin of error isn't 3.5 percent on either side -- as a result approaches zero percent (or 100 percent), the margin of error shrinks. So to describe New Zealand First's support as `below the margin of error', as political commentators are very fond of doing, is to make a statement based on a false assumption about the margin of error -- quite apart from the fact that such a statement is utterly meaningless in the first place, since our electoral system takes no notice of margins of error in polls.
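The same sketch as before shows just how much the margin shrinks at the extremes (again assuming the standard formula and a sample of 1000 voters):

    import math

    def margin_of_error(p, n, z=1.96):
        # Same standard formula as in the earlier sketch.
        return z * math.sqrt(p * (1 - p) / n)

    print(margin_of_error(0.5, 1000))    # about 0.031 -- 3.1 points either way
    print(margin_of_error(0.019, 1000))  # about 0.0085 -- 0.85 points either way

At 1.9 percent support, the real margin of error is well under one percentage point, not 3.5.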

A similar trap comes up when trying to compare two results in the same poll, or between polls. Often you will hear commentators saying that the difference between two parties in a particular poll is `less than the margin of error'. They're usually right when they say this, but again they're basing their statement on a false assumption -- that the margin of error for the difference between two parties is the same as the margin of error for individual parties in the same poll. It's not. When comparing the results of two parties (or politicians), the formula for calculating the margin of error is completely different from that used for a single result -- and it gives a larger value. So you can't actually be sure that one party has significantly greater support than another party, unless the difference between the two poll results is larger than this other, greater, margin of error.
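For the statistically inclined, here is a sketch of that other formula. In a single poll, a respondent who picks one party can't also pick the other, so the two proportions are negatively correlated; under the standard multinomial assumptions (and again assuming a sample of 1000), it works out like this:

    import math

    def margin_of_error_diff(p1, p2, n, z=1.96):
        # Variance of the difference between two proportions from
        # the SAME poll: (p1 + p2 - (p1 - p2)**2) / n. The negative
        # covariance between the two proportions makes this larger
        # than either individual margin of error.
        return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

    # TV1 poll: Labour 42 percent, National 41 percent.
    print(margin_of_error_diff(0.42, 0.41, 1000))  # about 0.056 -- 5.6 points

So in the TV1 poll, Labour's one-point lead over National would have needed to be nearly six points before anyone could call it statistically meaningful.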

The third reason you shouldn't rely on margins of error is that they only take into account errors caused by random sampling -- that is, errors caused solely by the fact that only a sample of voters is being polled, rather than the entire voting population. The margin of error doesn't take into account any other kind of error. One kind is bad poll design -- for example, assuming that `favourite politician' is the same thing as `preferred prime minister', when it's not. Another source of error can be a flawed method of obtaining the sample -- not everyone has a telephone, for example, and not all of those who give their opinion in a poll will bother to vote. You can see such non-sampling errors at work in the comparison between the TV1 and TV3 polls: disparities this large, which the margins of error are far too small to explain, occur far too often to be written off as the occasional rogue sample allowed for by the confidence level. There must be some other, non-sampling, errors occurring.
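One way to see how badly those two polls disagree (a sketch, assuming each poll independently sampled about 1000 voters) is a standard two-sample test on National's two figures:

    import math

    def two_poll_z(p1, n1, p2, n2):
        # z-statistic for the difference between two independent
        # poll results, using the pooled proportion to estimate
        # the standard error.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # National: 41 percent (TV1) versus 33 percent (TV3).
    print(two_poll_z(0.41, 1000, 0.33, 1000))  # about 3.7

A z-value of 3.7 is the sort of disagreement you'd expect from pure sampling error less than once in a thousand pairs of polls. Something other than sampling error is clearly at work.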

But the public tends not to care about such statistical problems. And that's a shame. Because while public opinion polls are more shonky than they're made out to be by the news media, the public is increasingly relying on them to determine their own opinions and actions.

I remember being highly amused when Jenny Shipley took over Jim Bolger's job as leader of the National Party, because support for her in the preferred-prime-minister stakes shot up, while support for Jim Bolger plummeted. Why? In theory, the leadership change shouldn't have made any difference -- whether a particular person actually had the job or not should not have affected whether people actually wanted them to have the job, but it did.

This isn't the pollsters' fault, to be sure; it stems from a tendency for people to want to `back a winner'. No doubt there are dozens of New Zealanders who would make better prime ministers than the usual suspects who appear in the preferred-prime-minister polls, but people don't bother giving their true opinion about this to the pollsters, because they `know' (from the results of previous polls) that it won't matter -- so they restrict their choice to current Members of Parliament, or even only to current front-benchers on each side of the House.

I often wonder if the same effect applies to political parties as a whole. How much larger would support have been for the Christian Coalition at the 1996 election, for example, if people hadn't been told by opinion polls that if you supported that party, you ran the risk of `wasting' your vote on a party that might not get into Parliament at all? (ACT can't be used as a counterexample for this theory: voters knew that ACT was very likely to be in Parliament anyway, thanks to Richard Prebble's electorate seat, but Christian Heritage had to make it on the party vote or not at all.)

It is for this reason that I think that political polls are anti-democratic -- too often they present results as a fait accompli, and act as barriers to entry for small political parties; in a vicious cycle, the polls persuade the voters that the parties are not worth voting for, which in turn gives the parties low poll ratings.

Any attempt to muzzle political pollsters would, I guess, be vigorously opposed as an infringement of their freedom of speech. So it seems that polls skewing the democratic process is a problem we'll just have to live with -- until, perhaps, the polls become so inaccurate that we stop paying any attention to them at all.


Copyright (C) 1999 Matthew Thomas

