What are we to make of this month's slew of economic forecasts from the International Monetary Fund and various financial institutions across the region?
Will the GCC economies really grow faster than last year - as some claim - or slower, as others insist? And is GDP growth about to slow dramatically, as the IMF predicts?
Past experience may be no guide to the future in financial matters, but for judging the reliability of forecasts, it's pretty reliable.
History shows that most forecasts are wide of the mark, but some will prove pretty accurate. The problem is, we won't be able to tell which ones until after the event.
Or can we? Is there perhaps a way of distinguishing the better forecasters from the lousy ones ahead of time?
Economists like to see everything through the prism of "market forces". That suggests that institutions with the most to gain will try hardest to produce good forecasts.
Yet a new analysis of currency value forecasts issued by leading banks over the past decade suggests even they aren't up to the job.
Prof Gerd Gigerenzer, a leading risk expert at the Max Planck Institute for Human Development in Berlin, found that the forecasts were seriously wrong in nine of the 10 years covered.
"It's hard to predict currency values worse than the banks did," Prof Gigerenzer told the magazine Science News. "Highly paid people produced worthless predictions".
So what are they doing wrong? Prof Gigerenzer is among those who believe part of the problem lies in confusing uncertainties and risks.
Forecast models use historical data to get a handle on the uncertainties - at least, up to a point. Put simply, they sift through heaps of data, find a trend, and then estimate the likely error bars representing the uncertainty in that trend.
The problem is that the standard theory used to estimate uncertainties makes certain assumptions about how uncertainty behaves, and these can be disastrously wrong.
Specifically, the fluctuations are usually assumed to follow a so-called Gaussian or "normal" distribution, popularly known as the bell curve, under which the chances of extreme deviations from the average are very low.
While this is often the case, it's known to be an unreliable assumption in finance, where extreme events of huge size can occur much more frequently than the Gaussian distribution predicts.
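That difference between the bell curve and reality can be made concrete with a small simulation. The sketch below (illustrative only, with made-up parameters rather than market data) draws samples from a Gaussian and from a classic fat-tailed alternative - a Student-t distribution with three degrees of freedom - and counts how often each produces an "extreme" move beyond a fixed threshold:

```python
import math
import random

random.seed(42)  # fixed seed so the experiment is repeatable
N = 100_000

# Gaussian samples with mean 0 and standard deviation 1
gauss = [random.gauss(0, 1) for _ in range(N)]

def student_t3():
    # Standard construction of a Student-t variate with 3 degrees of
    # freedom: a normal divided by the scaled root of a chi-squared.
    z = random.gauss(0, 1)
    s = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(s / 3)

fat = [student_t3() for _ in range(N)]

# Count moves beyond the same fixed threshold under each distribution
THRESHOLD = 4.0
extreme_gauss = sum(1 for x in gauss if abs(x) > THRESHOLD)
extreme_fat = sum(1 for x in fat if abs(x) > THRESHOLD)

print(f"Extreme moves, Gaussian:   {extreme_gauss} in {N}")
print(f"Extreme moves, fat-tailed: {extreme_fat} in {N}")
```

Under the Gaussian, moves beyond four units are vanishingly rare; the fat-tailed distribution produces them orders of magnitude more often - which is the essence of why bell-curve models understate financial crashes.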
This leads to one rule of thumb when looking for reliable predictions of the future. Lousy forecasters make dubious assumptions about how to deal with risk - and then conflate it with genuine uncertainty, which is far harder to assess.
Other telltale signs emerged from a celebrated study of expert forecasting by Prof Philip Tetlock, now at the University of Pennsylvania.
In the mid-1980s, he began a 20-year study to determine not just the accuracy of such forecasts, but also the reasons that some are more reliable than others.
The results, published in 2005 and based on interviews with more than 280 forecasters, confirmed what cynics have long suspected. Pundits with a reputation for painting a rosy future were risibly unreliable, typically assigning 65 per cent probabilities to scenarios that - 20 years on - came to pass just 15 per cent of the time.
But even worse were doom-mongers, who gave 70 per cent probabilities to bleak outcomes that materialised just 12 per cent of the time.
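The gap between stated confidence and actual outcomes can be scored. One standard measure is the Brier score: the average squared gap between the probability a forecaster states and what actually happens (1 if the event occurs, 0 if not), with lower scores being better. A minimal sketch using the figures above - and, for comparison, a hypothetical "humble" forecaster who simply states the observed base rate:

```python
def brier(p_forecast, base_rate, n=1000):
    # Brier score for a forecaster who always states probability
    # p_forecast for events that actually occur base_rate of the time.
    hits = int(round(base_rate * n))
    misses = n - hits
    return (hits * (p_forecast - 1) ** 2 + misses * p_forecast ** 2) / n

optimists = brier(0.65, 0.15)   # said 65%; happened 15% of the time
pessimists = brier(0.70, 0.12)  # said 70%; happened 12% of the time
humble = brier(0.15, 0.15)      # just states the base rate

print(f"Optimists:  {optimists:.4f}")   # 0.3775
print(f"Pessimists: {pessimists:.4f}")  # 0.4420
print(f"Humble:     {humble:.4f}")      # 0.1275
```

The doom-mongers score worst of the three, and both confident camps are soundly beaten by the unexciting forecaster who claims no special insight at all.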
The most reliable forecasts came from neither gloomy pessimists nor wide-eyed optimists - which is ironic, as their entertainment value makes these types most likely to pop up in the media. Instead, it was pundits who embraced complexity, gave less clear-cut forecasts and admitted their limitations who performed best (although even then, rarely better than chance).
In short, these types accept that their views can be ruined by sources of risk too complex to assess. And there's no shortage of those in today's global, interconnected and leveraged financial systems. Yet while forecasts hedged with caveats might save the blushes of experts, they're not much use to anyone needing clear-cut guidance - such as investors.
What are they supposed to do? Again, there has been no shortage of experts offering advice. Indeed, Nobel Prizes have been awarded to economists claiming to have found the best ways of investing in the face of uncertainty.
Perhaps the best known of these is the so-called Modern Portfolio Theory, which won its originator Harry Markowitz, now at the University of California, San Diego, a share of the 1990 Nobel Prize in economics. MPT is a recipe for allocating investments to different "pots" with the aim of making the most money consistent with a given appetite for risk.
Put simply, it does this by analysing financial data to assess the volatility of each asset, and also its link - "correlation" - with other assets. The aim is to create a well-diversified portfolio bringing the maximum return for a given level of risk.
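The arithmetic behind this is simple to sketch for two assets. In the textbook formula, portfolio volatility depends on each asset's own volatility and on their correlation; the lower the correlation, the bigger the diversification benefit. A minimal illustration with made-up numbers, not market data:

```python
import math

def portfolio_vol(w1, vol1, vol2, corr):
    # Textbook two-asset portfolio volatility: weights w1 and (1 - w1),
    # individual volatilities vol1 and vol2, correlation corr.
    w2 = 1 - w1
    variance = (w1 * vol1) ** 2 + (w2 * vol2) ** 2 \
        + 2 * w1 * w2 * vol1 * vol2 * corr
    return math.sqrt(variance)

# A 60/40 split of assets with 20% and 10% annual volatility
uncorrelated = portfolio_vol(0.6, 0.20, 0.10, 0.0)
correlated = portfolio_vol(0.6, 0.20, 0.10, 1.0)

print(f"Perfectly correlated: {correlated:.4f}")   # 0.1600
print(f"Uncorrelated:         {uncorrelated:.4f}") # 0.1265
```

With perfect correlation the volatilities simply add in proportion; with zero correlation the portfolio is meaningfully calmer than that. The catch, as the next paragraph notes, is that the volatility and correlation numbers fed into this formula are themselves estimates - and in a crisis, correlations have a habit of jumping towards one just when diversification is needed most.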
It sounds wonderful, until one looks "under the hood" of MPT and finds the usual problems. Risk is conflated with uncertainty, and is modelled using Gaussian distributions which are undermined by extreme events. The same goes for MPT's attempts to model correlations.
That suggests that this Nobel-winning theory won't perform very well in real life - which is pretty much what recent studies have found.
So where does this leave us? Prof Gigerenzer says the evidence suggests it's hard to beat the simplest possible investment strategy - of just dividing money equally over different types of investments, such as stocks, bonds and cash.
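This "1/N" heuristic, as Prof Gigerenzer calls it, needs no historical data, no volatility estimates and no correlation matrix. A trivial sketch, using hypothetical asset classes and amounts:

```python
def equal_weight(total, assets):
    # The 1/N heuristic: split the money evenly across all asset
    # classes, estimating nothing.
    n = len(assets)
    return {asset: total / n for asset in assets}

allocation = equal_weight(90_000, ["stocks", "bonds", "cash"])
print(allocation)  # each pot gets 30,000.0
```

Its appeal is precisely that there are no estimated parameters to get disastrously wrong.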
The moral of all these studies is that forecasters and investment advisers routinely fall prey to the messy complexity of the real world. In the face of the unknowable future, there's safety in simplicity.
Robert Matthews is visiting reader in science at Aston University, Birmingham, England