Can the UAE's excitement for artificial intelligence overcome human nature?

Public mistrust of AI has repeatedly stalled its development in the past, writes Robert Matthews

Visitors attend a demonstration of a robot named 'Watson' at the IBM stand at Mobile World Congress 2016 in Barcelona, Spain. David Ramos / Getty Images

For those lucky enough to get in, the UAE AI Summer Camp that starts on Sunday and runs through the summer may well prove a transformative experience.

Funded by the office of the Minister of State for Artificial Intelligence, with speakers from the likes of Microsoft and IBM, and aimed at school and university students and government executives, the Camp sold out in 24 hours – and small wonder. Attendees will get access to cutting-edge technology and be able to build systems such as AI chatbots that converse with humans.

As someone who began working on AI systems more than 25 years ago, I understand the excitement of getting computers to mimic brain-like abilities, however crudely.

But I also know that AI enthusiasts are prone to overlooking the single biggest obstacle to the adoption of the technology: human nature.  

Time and again, the reaction of humans to AI has hobbled its advance, or even stopped it dead.


I speak from experience. Back in the early 1990s, I created an AI system that could recognise literary style and distinguish between authors.

Working with an expert on writing styles, I programmed a computer to behave like a so-called neural network, now one of the most widely used forms of AI.

Neural networks are good at mimicking the brain’s ability to spot patterns in a mass of noisy data, like spotting a familiar face among a crowd of strangers.

Put simply, a neural network does this using mathematical recipes – algorithms – to learn how to label and categorise data. Then, when presented with data it hasn’t seen before, it uses that training to assign the new data to one of the categories it has learned.

For our project, we trained a neural network to recognise the writing style of Shakespeare and some of his contemporaries using lots of samples of their work.

When we then showed it a new sample of text, the neural network proved remarkably good at identifying the right author.
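To make that train-then-classify loop concrete, here is a minimal sketch in Python using the scikit-learn library. The sample texts, author labels and network size are invented placeholders for illustration – our 1990s system used carefully chosen stylistic features and its own implementation, not this off-the-shelf pipeline.

```python
# A minimal sketch of neural-network authorship classification,
# assuming scikit-learn is installed. The texts and labels below are
# illustrative placeholders, not the data from the original study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Training samples, each labelled with its author.
texts = [
    "Shall I compare thee to a summer's day?",
    "O, she doth teach the torches to burn bright!",
    "Was this the face that launch'd a thousand ships?",
    "Come live with me and be my love.",
]
authors = ["Shakespeare", "Shakespeare", "Marlowe", "Marlowe"]

# Convert each text into word-frequency features, then train a small
# feed-forward neural network to map those features to an author.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(texts, authors)

# Shown unseen text, the network assigns it to a learned category.
print(model.predict(["If music be the food of love, play on."]))
```

Even in a toy like this, the model’s verdict emerges from hundreds of learned numerical weights rather than explicit rules – the “black box” quality that, as we shall see, caused us trouble.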


Our aim was to use the neural network to investigate a number of controversies about Shakespeare’s career. We found evidence that some of the early plays by Shakespeare had a writing style strikingly similar to that of his famous rival Christopher Marlowe.

Claims that Shakespeare used unpublished scripts by his contemporary were circulating even during his lifetime, and are still taken seriously by literary scholars.

Yet when we published our work, we ran into a problem: people are suspicious of computers with human-like expertise.

Critics argued that we had not given a clear explanation of why the computer had made the decisions it had. And they had a point. To this day, neural networks are notorious for being something of a black box: it’s very hard to tell why they make the decisions they do.  

But then, that’s a criticism that could also be levelled at humans. You may recognise Shakespearean prose when you hear it – but can you say precisely how or why?

Decades later, our findings are still being debated, but chiefly by computer scientists working on similar problems. Literary scholars, in contrast, remain suspicious of attempts to capture their skill using AI.

In that, they’re hardly unique. Despite all the excitement over driverless cars, polls have repeatedly shown that most consumers don’t trust the AI technology behind them.

Indeed, evidence suggests their suspicion is actually increasing – driven perhaps by the mounting reports of AI-related accidents.

AI enthusiasts like to comfort themselves by arguing that the power of the technology will eventually win over the doubters and the fearful.

But that’s just what they said in the 1980s when AI went through its last “golden age”.

Back then the big buzz-phrase was a form of AI called expert systems.

Put simply, these involve picking the brains of human experts in, say, medical diagnosis, and capturing their thought processes in ways computers can execute.
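As a flavour of what that looked like in practice, here is a toy rule-based sketch in Python. The symptoms and rules are invented placeholders, not medical advice; real systems of the era, such as MYCIN, used far larger rule bases and ways of handling uncertainty.

```python
# A toy rule-based "expert system": human know-how captured as
# if-then rules that a program applies to the facts of a case.
# The rules below are invented placeholders, not medical advice.
RULES = [
    ({"fever", "cough", "fatigue"}, "possible influenza"),
    ({"fever", "stiff neck"}, "urgent: rule out meningitis"),
    ({"sneezing", "runny nose"}, "likely common cold"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present in the case."""
    return [verdict for conditions, verdict in RULES
            if conditions <= symptoms]

# Example consultation: the machine's "opinion" on a set of symptoms.
print(diagnose({"fever", "cough", "fatigue", "headache"}))
# ['possible influenza']
```

Unlike a neural network, every conclusion here can be traced back to an explicit rule – yet, as doctors’ reactions showed, transparency alone did not win trust.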

At first, there was huge excitement – and media coverage – about the potential of expert systems. Many organisations were caught up in the hype and invested heavily in the technology. But they, too, ran into the problem of people being reluctant to cede power to AI.

Doctors, in particular, resented having their professional opinions called into question by a machine – even if its diagnoses were probably more reliable.

Fast forward 30-plus years and AI is again riding high. But the problems remain the same. AI is still not living up to the hype, and it still generates suspicion.

IBM, one of the sponsors of the Summer Camp, knows this better than most.

Its Watson AI system has been a poster child for the capabilities of AI since 2011, when it beat human champions on the US TV quiz show Jeopardy! But since then, the sheen has started to fade.

In 2013, IBM teamed up with the Memorial Sloan-Kettering Cancer Center in New York to help in the fight against cancer.

Watson was trained by clinicians and other experts to interpret and understand cancer data, medical trial results and other research.

Initially, the company talked of a “new era” in cancer research and treatment. But then reports emerged of Watson struggling with the sheer complexity of the tasks mastered by human experts. The collaboration with the cancer centre has since ended, and the IBM Watson team is facing cutbacks.

Industry commentators talk of the marketing running way ahead of Watson’s abilities – in other words, the same issue that led to the collapse of the AI bubble in the 1980s.

There are lessons here for those at the Summer Camp hoping to be part of the AI revolution. First, don’t be seduced by the apparent power of the technology. And second, never underestimate the power of hype to turn a revolution into a bubble.  

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK