As 2001: A Space Odyssey turns 50, the EU is cracking down on real-life HALs

New EU laws could mean big fines for AI that can't explain its thought process

Still from 2001: A Space Odyssey, one of a number of films made about a futuristic universe. Courtesy Metro-Goldwyn-Mayer

When 2001: A Space Odyssey opened in movie theatres 50 years ago this month, it left audiences baffled. Its mind-bending special effects, enigmatic ending and main character – HAL, a psychotic computer – were too much to cope with. Over 200 people walked out of the first showing, and critics panned it.

Now, half a century on, 2001 is widely regarded as the greatest science fiction movie ever – and one of its major themes is about to become a real-life legal issue of global proportions. When the European Union's General Data Protection Regulation (GDPR) becomes law next month, companies that use computer algorithms to make big decisions about, say, giving someone financial credit will have to be able to explain why the decision was made.

European citizens will, in short, have the right to insist that companies open the "black box" and show why their algorithm turned them down. Yet in this case, the black box is a neural network and there’s currently no hope of meeting that demand.

A network of brain cells

Let's use the example of HAL – in the film, "he" is the artificial intelligence looking after the first manned space mission to Jupiter. On their journey, the crew start to suspect HAL is malfunctioning in some way – the suspicions are confirmed by mission control back on Earth.

The astronauts are instructed to disconnect HAL's higher functions. But before they can do so, HAL takes action, with murderous consequences. Movie-goers never get an explanation of this behaviour, which is all the more perplexing given the computer's stated infallibility.

A novel by Arthur C Clarke, published shortly after the movie, claimed HAL had fallen prey to guilt, caused by the clash between "his" role as a crew member and his secret orders not to tell his colleagues the true goal of the mission. But HAL is a so-called neural network computer, a real-life type of AI: this sort of computer is programmed to act like a network of brain cells, solving problems by learning from examples.

While neural networks have proved incredibly good at a host of tasks, from facial recognition to data-mining, the technology has a big drawback: it's impossible to be sure how a neural network comes to its decisions. And under the new measures, companies that can't explain just this will face punitive fines.
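To make that concrete, here is a minimal sketch in Python, not drawn from the article itself, of what "learning from examples" means in practice. It trains a tiny neural network on invented credit data using the scikit-learn library; the applicant fields and figures are hypothetical.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical past decisions: [income, existing debt, years employed]
X = np.array([[55, 5, 10], [20, 15, 1], [40, 2, 6],
              [25, 20, 2], [70, 1, 12], [18, 12, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

# The network is given no rules; it infers a decision boundary
# purely from the examples above.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# Decide on a new applicant.
print("decision:", net.predict(np.array([[30, 10, 3]])))

# The only "explanation" on offer is the raw learned weights:
# arrays of numbers with no human-readable reasons attached.
print("weights:", [w.shape for w in net.coefs_])

The last two lines are the point: the trained model is nothing but matrices of weights, which is why opening the "black box" yields numbers rather than reasons.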

No amount of analysis will reveal what's going on inside the black box because, like a real brain, a neural network gives no clues about the actual thought processes driving any act.

How does AI explain itself?

Under the law, a slew of current practices, like burying consent statements in small print and pre-ticked opt-in boxes, will also no longer be lawful. And failure to comply carries colossal financial penalties, up to a maximum of €20 million or 4 per cent of annual turnover, whichever is the larger.
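The penalty rule itself is simple arithmetic. This one-function sketch just restates the figures quoted above:

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    # €20 million or 4 per cent of annual turnover, whichever is the larger
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(100_000_000))    # smaller firm: cap is €20,000,000
print(max_gdpr_fine(5_000_000_000))  # larger firm: cap is €200,000,000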

Computer scientists have wrestled with this so-called explainability problem for decades, with little success. Back in the 1970s, expert systems seemed to offer a solution, as they were based on hard rules of logic whose conclusions could be analysed.

But during the 1990s, the limited abilities of expert systems and the sheer effort of building them led to their losing out to neural networks, which needed only off-the-shelf computer programs plus training data with which to teach them.
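The contrast is easy to see in code. Below is a minimal sketch of the 1970s expert-system style: hand-written rules whose conclusions can be replayed step by step. The rules are invented for illustration, not taken from any real system.

def credit_decision(income, debt, years_employed):
    # Hard rules of logic, written down by a human expert.
    reasons = []
    if debt > income * 0.5:
        reasons.append("debt exceeds half of income")
    if years_employed < 2:
        reasons.append("fewer than two years in employment")
    approved = not reasons
    return approved, reasons if reasons else ["all rules satisfied"]

approved, reasons = credit_decision(income=30, debt=20, years_employed=1)
print("approved:", approved)
for r in reasons:
    print(" -", r)  # every conclusion has a traceable chain of rules

An expert system can always print the rules it fired; a neural network of the kind sketched earlier cannot, and that is precisely the gap the new law exposes.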

For years, the sheer power of neural networks has attracted interest from companies in fields from finance and medicine to security. Now they must face the fact that, deep down, they've no idea what their black boxes are doing, and there is little sign of this changing any time soon.


One major hurdle is a phenomenon discovered by computer scientists whereby the better an AI is at its task, the less explicable its decisions become. The US Defense Advanced Research Projects Agency is currently funding a series of research programmes to close the gap. The motivation is simple: without being able to explain itself, AI will never be fully trusted by its human creators.

For those now using AI to make money, the hope must be that computer scientists find ways of keeping their systems on the right side of the law.

After half a century, 2001: A Space Odyssey is still remembered for its stunning vision of the power of technology. But perhaps its greatest legacy is its warning about the need to know just what is going on inside the likes of HAL.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK