Predictions of the future? Leave it to the superforecasters

University team rises to a challenge set by the US intelligence community to find reliable ways of predicting future events, blowing away the competition with its accuracy.

North Korean leader Kim Jong-un cheers the launch of a ballistic missile. EPA

When American intelligence agencies set out to find thinkers capable of predicting the future, the ‘super-forecasters’ they discovered were not at all what the spooks behind the tests had envisaged.

Global stock markets crashing, refugee numbers soaring, nations in turmoil – these are wild times. Surely only fools would try to predict what’s going to happen next.

Yet amid all the chaos, researchers now think they have found a select band of individuals able to forecast events with uncanny accuracy.

They have been identified through a three-year study funded by the US intelligence community, and their name reflects their ability: super-forecasters.

Over the decades, the likes of the CIA have tried to get insights into future events using everyone from specialists to spies – and even psychics. The results have been, to put it charitably, mixed.

In an attempt to find out what really works, in 2011 the community’s research agency, the Intelligence Advanced Research Projects Activity (IARPA), set up a programme of tournaments aimed at finding the most effective strategy for predicting events.

Teams of academics came forward with their ideas for how best to answer geopolitical questions such as how many refugees will flee Syria next year, and whether North Korea will detonate another nuclear weapon in the next three months.

The teams were sent hundreds of questions by the agency, and the accuracy of each strategy was checked as events unfolded.

Now the results are in, and the winners are the team from the aptly named Good Judgment Project, led by University of Pennsylvania psychologist Professor Philip Tetlock. Its forecasters blew away the competition with the reliability of their probability estimates of events.
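
Tournaments of this kind judge forecasters on the quality of their probability estimates rather than on simple right-or-wrong calls, and a standard yardstick for that is the Brier score. As a rough illustration – a minimal sketch with invented numbers, not the project’s own scoring code – here is how such a score rewards a well-hedged forecaster over an overconfident one:

```python
# Brier score: the mean squared gap between the probability a forecaster
# assigned and what actually happened (lower is better).
# All forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities (0.0-1.0); outcomes: 1 if the event happened, else 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0]             # what actually happened on three questions
cautious = [0.7, 0.2, 0.3]       # hedged probability estimates
overconfident = [1.0, 0.0, 0.9]  # bold calls, badly wrong on the last question

print(round(brier_score(cautious, outcomes), 3))       # 0.073
print(round(brier_score(overconfident, outcomes), 3))  # 0.27
```

One badly wrong confident call is enough to leave the bold forecaster with a far worse score than the cautious one.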

The appearance of Prof Tetlock’s name among the winners is, ironically, a prediction most insiders would have nailed: no-one knows more about how humans go about forecasting the future.

Prof Tetlock made headlines a decade ago with his 20-year study of just how lousy “experts” are at forecasting. The study examined thousands of predictions made by almost 300 pundits.

The key finding was shocking yet somehow entirely predictable: the typical expert performed no better than chance.

But Prof Tetlock made other, less obvious, findings. For example, political stance made no difference. Marxists proved just as unreliable in seeing the future as gung-ho right-wingers.

Those who tended to make cheery prognostications weren’t very reliable, typically assigning 65 per cent probabilities to scenarios that – 20 years on – came to pass just 15 per cent of the time.

Yet even worse were doomsters, who gave 70 per cent probabilities to bleak outcomes that materialised just 12 per cent of the time.
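
Calibration figures like these come from a simple check: compare the probability a pundit assigned with how often the predicted events actually happened. A minimal sketch of that kind of check, using invented data rather than anything from Prof Tetlock’s study:

```python
# Compare the average probability a pundit assigned with the actual
# frequency of the events. The data below is invented for illustration.

def calibration(assigned_probs, outcomes):
    """Return (average stated probability, actual rate at which events occurred)."""
    avg_stated = sum(assigned_probs) / len(assigned_probs)
    actual_rate = sum(outcomes) / len(outcomes)
    return avg_stated, actual_rate

stated = [0.65] * 20              # twenty upbeat predictions, each given 65%
happened = [1, 1, 1] + [0] * 17   # ...of which only three came true

avg, rate = calibration(stated, happened)
print(f"stated {avg:.0%} on average, happened {rate:.0%} of the time")
# stated 65% on average, happened 15% of the time
```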

And at the bottom were those whose predictions were even less reliable than random guessing. Prof Tetlock found that these “experts” ignored complexity or different sources of evidence. Instead, they tended to make predictions consistent with some grand thesis (often their own), and were also bizarrely over-confident.

Prof Tetlock’s research did more than merely confirm everyone’s suspicions about “experts”, however. It also hinted at the existence of people who were surprisingly good at forecasting future events.

The most reliable forecasts tended to come from those who avoided black-and-white predictions and were willing to admit their limitations.

Ironically, such forecasters are also the least likely to appear in the media, which prefers its pundits to be clear and confident, with accuracy an optional extra.

But following their victory in the tournament, Prof Tetlock and his team have now returned to the question of who makes a good forecaster – armed with much more data.

Their findings have implications for more than just the intelligence services: they may transform the whole concept of expertise.

Over the course of the US intelligence community’s tournament, Prof Tetlock’s team recruited thousands of volunteer forecasters from around the world.

They certainly all had the potential to be “experts”, being graduates with above-average scores in intelligence tests.

But one of the key findings of the study is that even smart people benefit hugely from being trained how to assess evidence.

For example, many people making predictions fall into the so-called base-rate trap – failing to take account of how inherently likely events are. Faced with a choice between two potential outcomes, and lacking strong evidence either way, it usually makes more sense to go with the more common one – or, as doctors say, “diagnose rare diseases rarely”.
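
The arithmetic behind the base-rate trap is worth seeing once. The sketch below – with invented numbers, not data from the study – uses Bayes’ theorem to show that even fairly strong evidence pointing to a rare outcome often leaves the common outcome the better bet:

```python
# Bayes' theorem: how likely is the rare event, given a piece of evidence
# that points towards it? The numbers below are invented for illustration.

def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(event | evidence), given the event's base rate, the evidence's hit rate
    P(evidence | event) and its false-alarm rate P(evidence | no event)."""
    p_evidence = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_evidence

# A 'rare disease' seen in 1% of cases, and a test that spots 90% of real
# cases but also wrongly flags 10% of healthy ones.
print(round(posterior(base_rate=0.01, hit_rate=0.9, false_alarm_rate=0.1), 3))
# 0.083 - despite the positive test, the rare outcome remains unlikely
```

Ignore the 1 per cent base rate and the test looks decisive; include it, and the common outcome is still more than 10 times as likely.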

Prof Tetlock and his colleagues found that forecasters who received training in such rules of thumb achieved much higher accuracy.

Putting people in teams also helped, by allowing them to pool knowledge – and be exposed to differing viewpoints.

Even so, the Good Judgment Project team found that a few per cent of the volunteers had something extra – making them super-forecasters.

Unsurprisingly, one of their defining characteristics is higher intelligence test scores – specifically fluid intelligence, which focuses on novel problem-solving and logical thinking.

But, crucially, they’re also the exact opposites of know-alls.

Super-forecasters proved strikingly more likely to challenge their own predictions and make frequent but small adjustments to them. According to the researchers, this reflected a determination to get better at what they did, and willingness to work at it.

They were also more willing to regard their beliefs as testable theories rather than sacred possessions to be defended at any cost.

With their proven skills, a whole team of super-forecasters should surely perform better than a team of ho-hum performers. But given the dire impact of overconfidence on forecast reliability, the researchers wondered what would happen if they told the super-forecasters they were in an elite group – and whether accuracy would be swapped for arrogance as a result.

Again, the super-forecasters confounded expectations, doing even better after being told how good they were. And when put into groups and working together, they produced far more reliable forecasts than other teams.

In fact, in a head-to-head competition on 175 specific questions, the super-forecasters outperformed the least able forecasters on 169 of them.

So are super-forecasters different kinds of people – or just ordinary people who do things differently? “Our data suggests that the answer is a blend of both,” say Prof Tetlock and his colleagues, reporting in Perspectives on Psychological Science.

Their research raises a host of intriguing questions. Can super-forecasters become even better, overcoming their own quirks with years more experience? Can we all learn to be, if not super-forecasters, then at least less bad at seeing the future?

And then there’s the biggest question of all: can those in power ever accept forecasts they don’t want to hear?

Robert Matthews is visiting professor of science at Aston University, Birmingham