When hard-core numbers fall victim to human nature

Opinion pollsters will be nervously awaiting the final election result, hoping that the abstruse art of statistical sampling will have worked its magic again.
The two US presidential candidates and their supporters will not be the only ones fretting about the vote count during tomorrow's election. Opinion pollsters will also be nervously awaiting the final result, hoping that the abstruse art of statistical sampling will have worked its magic again, and turned the responses of a tiny proportion of the population into a reliable guide to the views of an entire nation.

The idea that a poll of a few thousand people can predict the voting intentions of over 200 million even remotely well is somewhat counter-intuitive. Yet as long as those polled are both sufficient in number and representative, statistical theory shows the results will be reliable up to a point - and even gives an estimate of that reliability. As common sense suggests, the bigger the sample, the smaller the margin of error.

More precisely, if the poll estimates the percentage share of the vote using a sample of N people, then the percentage margin of error is given by the formula 100 divided by the square-root of N. For the typical opinion poll, which is based on around 1,000 people, this means the margin of error in the share of the vote is around plus or minus 3 percentage points.

So what are we to make of the decidedly disparate results of various polls published last week? One survey based on 1,198 likely voters had Barack Obama leading John McCain by 15 percentage points. Even with that plus or minus 3 point margin of error on each candidate's share, that is a clear lead for Mr Obama. But another poll carried out around the same time, and based on a similar size of sample, found Mr Obama leading by just 5 percentage points. As Mr Obama's true share could be 3 points lower and Mr McCain's could be 3 points higher, a 5 point difference is too close to call. So which do we believe? The most obvious reason for the disparity is that one or both of the polls is not truly representative.
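The rule of thumb above is easy to check for yourself. The sketch below (an illustration, not part of the original article) applies the 100-divided-by-the-square-root-of-N formula to the two sample sizes just mentioned:

```python
import math

def margin_of_error(n):
    """Rough margin of error, in percentage points, for a poll of n people,
    using the article's rule of thumb: 100 / sqrt(n)."""
    return 100 / math.sqrt(n)

# The typical poll of around 1,000 people...
print(round(margin_of_error(1000), 2))  # about 3.16 points
# ...and the survey of 1,198 likely voters mentioned above
print(round(margin_of_error(1198), 2))  # about 2.89 points
```

With roughly a 3-point margin on each candidate's share, a 15-point lead survives the uncertainty comfortably, while a 5-point lead could vanish entirely.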
Getting a representative sample is the biggest challenge facing opinion pollsters, and one which keeps them awake on the eve of many elections. Over the years they have identified many of the causes of unrepresentative samples, and do their level best to weed them out.

At the top of their list is making sure they do not include the views of people who are not going to vote. Research has shown that people who vote have different political views from those who do not. Including non-voters in a poll can therefore give a very misleading view of how the vote will actually turn out.

Undecided voters are scarcely less problematic. Pollsters try hard to persuade those they interview to opt for one candidate or another, but some people remain undecided - leaving the pollsters with no choice but to try to guess which way they will vote. Not surprisingly, the pollsters sometimes get it badly wrong. In 1992, Gallup decided to allocate undecided voters to Bill Clinton and ended up overestimating the size of his victory by nearly 6 percentage points, as many of the undecided in fact voted for Ross Perot.

The most controversial issue of all is one of simple honesty. Are those being polled actually telling the truth about how they will vote? Over recent weeks there has been much discussion of the so-called Bradley Effect, named after Tom Bradley, the African-American candidate who lost the 1982 race for the governorship of California despite being ahead in opinion polls prior to the vote. According to some, Mr Bradley fell victim to voters who felt uncomfortable telling pollsters they would not vote for a black candidate, and who voted for his white rival once in the polling booth.

Opinions are divided over whether Mr Obama will face a similar fate. Some psephologists insist the Bradley Effect no longer exists in these racially enlightened times; some even doubt whether it ever existed, and blame faulty handling of the undecided vote instead.
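To see why the handling of undecided voters matters so much, consider a small sketch (the numbers here are invented for illustration, not taken from any real poll) showing how differently a 15 per cent undecided bloc can swing a projected result depending on how it is allocated:

```python
def project(share_a, share_b, undecided, fraction_to_a):
    """Project final shares for candidates A and B after splitting the
    undecided vote, giving fraction_to_a of it to A and the rest to B."""
    return (share_a + undecided * fraction_to_a,
            share_b + undecided * (1 - fraction_to_a))

# A hypothetical 44-41 poll with 15% undecided.
# Allocating every undecided voter to the leader, as Gallup did for
# Bill Clinton in 1992, versus splitting the bloc evenly:
print(project(44, 41, 15, 1.0))  # (59.0, 41.0) - a landslide
print(project(44, 41, 15, 0.5))  # (51.5, 48.5) - a close race
```

The same raw poll yields anything from a landslide to a near dead heat, which is how a wrong guess about the undecided can produce an error of several percentage points.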
But others point to a more subtle explanation - and one that may also explain another notorious failure of opinion polling, which took place in the UK general election of 1992. Seemingly against all the odds, the Conservative prime minister John Major was returned to power, having turned the one per cent lead the opinion polls gave his rival into a convincing eight per cent victory.

At the time, much of the blame for this humongous discrepancy between the polls and reality was directed at simple voter dishonesty. Major had promised tax cuts if re-elected, and it was argued that voters felt embarrassed admitting to pollsters that they were keen on such a policy, but had no qualms voting for it.

Yet as with the Bradley Effect, some psephologists think a more subtle effect was at work - and one with the power to undermine any poll. In the end, for all their attempts to get a representative sample, opinion pollsters can only include the views of those they actually talk to. But what about all those who simply wave away the pollsters as they approach, or slam down the phone?

Some psephologists argue that both the Bradley Effect of 1982 and the Shy Tory Effect of a decade later were less to do with dishonesty than with the political mores of those who refuse to take part in polls. Research by the respected US polling expert Andrew Kohut has uncovered evidence that those who refuse to take part in polls tend to come from poorer, less well-educated backgrounds. As that is also the profile of those more likely to harbour racist views, it could help explain the Bradley Effect. Whether such people are also keener on tax cuts is not clear; as Mr Kohut discovered, finding out what they think about anything is, by definition, very hard work.

What is clear is that unless Mr Obama has made his tax-cutting plans clear to poorer, less well-educated whites, he could give pollsters a very nasty shock tomorrow night.
Robert Matthews is Visiting Reader in Science at Aston University, Birmingham, England