Artificial Intelligence: No longer the stuff of science fiction

Writers have long toyed with the idea of a robot revolution but the very real limitations of artificial intelligence have made this an impossibility. However, when Professor Stephen Hawking warns about the threat to man posed by technology, it's time to assess the arguments.

A ‘robot’ activist for the Campaign to Stop Killer Robots. Getty Images

It was supposed to be a straightforward good-news PR interview, set up by Intel before Christmas to showcase the new software it had designed for Stephen Hawking, the theoretical physicist locked in by motor neurone disease and utterly reliant on computers for communicating with the outside world.

The 72-year-old former Lucasian Professor of Mathematics at Cambridge University, author of A Brief History of Time and widely regarded as one of the smartest human beings alive, finally had at his disposal truly sophisticated adaptive word-prediction software to assist his "speech".

But ACAT, Hawking’s new Assistive Contextually Aware Toolkit, had got one of the greatest minds of our time thinking about the implications of artificial intelligence, and the televised interview with Rory Cellan-Jones, the BBC’s technology correspondent, was about to go off-piste.

The new software, plumbed into just about everything Hawking had ever written, said or published, was so intuitive that context alone was sufficient for it to guess what his next words would be. ACAT was, in short, putting words into Hawking’s mouth.
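Intel has not published ACAT’s inner workings, but the general idea of context-driven word prediction can be illustrated with a toy model. The Python sketch below, using a made-up corpus and hypothetical names, simply counts which words have followed which in a user’s past writing and suggests the most frequent candidates for the next word.

```python
from collections import Counter, defaultdict

# Toy illustration of contextual word prediction (not Intel's ACAT internals):
# a bigram model built from a user's own past writing suggests likely next
# words from the word just typed.
corpus = ("the universe began with a big bang and the universe "
          "has been expanding ever since").split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1          # count how often nxt follows prev

def suggest(prev_word, k=3):
    """Return up to k of the words most often seen after prev_word."""
    return [word for word, _ in followers[prev_word].most_common(k)]

print(suggest("the"))        # e.g. ['universe']
print(suggest("universe"))   # e.g. ['began', 'has']
```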

Midway through a jocular inquiry by Cellan-Jones about why, despite the new software, he had decided to stick with his clunky robotic voice, Hawking suddenly delivered an unscheduled apocalyptic warning about the perils of artificial intelligence – a message rendered even more ominous by the expressionless professor’s familiar robotic voice.

It was a scene that would not have looked out of place in a film from the Terminator franchise, in which Skynet, a man-made artificial-intelligence system, achieves autonomy and decides to wipe out the now redundant humans who created it.

“The primitive forms of artificial intelligence we already have, have proved very useful,” Hawking began. “But I think the development of full artificial intelligence could spell the end of the human race.”

There was a curiously long pause as Hawking, and his new software, appeared to choose his words carefully.

“Once humans develop artificial intelligence,” Hawking continued eventually, “it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

All that was missing was Arnold Schwarzenegger’s cyborg, sent back in time to silence the professor and his timely warning.

Hawking isn’t the first person, real or fictional, to raise concerns that, in creating machines capable of outsmarting us, we are programming our own destruction.

In 1921, the Czech playwright Karel Capek staged a play that predicted the fall of mankind at the hands of a race of artificial slaves who, learning to think for themselves, turn against their human creators. Rossum's Universal Robots also gave the world the word robot, taken from robota, the Czech word for forced labour.

Since then, conjured up by existential angst at the hubristic nature of our self-satisfied cleverness, an army of sinister robots and disembodied entities imbued with artificial intelligence and bent on mankind’s destruction has poured forth from the pens of science-fiction writers.

HAL 9000, the disembodied computer star of 2001: A Space Odyssey, the 1968 film Stanley Kubrick made with Arthur C Clarke, is a classic of the genre. In charge of systems on the spaceship Discovery One, HAL is capable of autonomous reasoning, a blessing that quickly mutates into a curse when his human companions suspect he is malfunctioning and try to switch him off. HAL, logical to a fault, interprets this as a threat to his mission and sets about killing the crew.

In the real world, however, genuine artificial intelligence – the ability of a computer to recreate or simulate human intelligence, with all its vagaries – has proved somewhat elusive.

The term was coined in 1955 by John McCarthy, then a mathematician at Dartmouth College in New Hampshire. In 1956, McCarthy organised a conference to explore the theory that “every aspect of ... intelligence can in principle be so precisely described that a machine can be made to simulate it”.

McCarthy died in 2011, at the age of 84, after a lifetime devoted to the unfulfilled notion. In the way of the modern digital world, he left behind much of himself as a virtual presence on a website, strangely chaotic for a computer scientist and untended since his last entry, but still echoing with his thoughts and work.

And what is clear from this now long-untended garden of ideas is that most of the blooms languishing in it remain precisely that – only ideas.

Almost 60 years after McCarthy’s vision of artificial intelligence, and despite the enthusiastic endorsement of the concept by fiction writers, there is still no such thing as a general-purpose robot, equipped with genuine artificial intelligence, capable of understanding, navigating and interacting with the world and its other inhabitants in an entirely autonomous manner.

Apple’s Siri, apparently understanding the questions we speak into our iPhones and responding with relevant information or actions, is merely working from a list of recognised words and actions and is not even a step down the right road.

Likewise, Google’s new driverless car, unveiled last week and due to be menacing humans on the streets of California in the new year, will merely be reacting to information from its sensors in accordance with the programmed imperatives of an algorithm.
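Neither system requires anything resembling understanding: each maps recognised inputs onto pre-programmed responses. The sketch below is a deliberately crude illustration of that idea; the keywords, actions and thresholds are hypothetical, and nothing in it reflects Apple’s or Google’s actual code.

```python
# Hypothetical keyword-to-action table, standing in for voice-assistant "intents".
INTENTS = {
    "weather": "fetch_forecast",
    "alarm": "set_alarm",
    "call": "start_phone_call",
}

def handle_utterance(text):
    """Pick an action by spotting a recognised keyword; no understanding involved."""
    for keyword, action in INTENTS.items():
        if keyword in text.lower():
            return action
    return "fallback_web_search"

def react_to_obstacle(distance_metres):
    """A driverless-car style rule: brake when a sensor reading crosses a threshold."""
    return "brake" if distance_metres < 30 else "maintain_speed"

print(handle_utterance("What's the weather like today?"))  # -> fetch_forecast
print(react_to_obstacle(12.5))                             # -> brake
```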

Nevertheless, there remains no shortage of scientists plodding along in the fictional footsteps of Miles Dyson, the Cyberdyne Systems engineer who creates the Skynet software that takes over the world in the Terminator film franchise, and dreaming of the day when machines will outsmart humans.

Kevin Warwick, professor of cybernetics at Reading University in the United Kingdom, is one of them. In his 1997 book March of the Machines, Warwick declared not only that it was possible that machines more intelligent than us would one day take over, but that it could happen in the next 10 or 20 years.

“The future points to machines which can evolve into better, even more intelligent machines and which can replace any parts that become faulty,” he enthused. “In this way machines could become immortal.”

In 2009, undeterred by the failure of his prediction to come true, Warwick appeared in the documentary Transcendent Man, celebrating the "life and ideas" of Ray Kurzweil, a computer scientist, futurist, author and, since 2012, a director of engineering at Google, where he is charged with developing machine intelligence.

For Kurzweil, a cult figure to bot-heads everywhere, the buzz word is “singularity” – not the singularity of the Big Bang, as posited by Hawking and others, but “an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today”.

In short, as his 2005 book, The Singularity is Near, made clear, Kurzweil can't wait for the day when humans will mingle their DNA with the software of computers to achieve immortality – and that day, he claimed at a conference of like-minded "singularitarian immortalists" in 2013, was a mere 30 years away.

The problem with all the excitement stirred by the pronouncements of Hawking, Kurzweil and others, says Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, is that it all hangs on the single assumption “that every aspect of human mentality can be instantiated by a computer program”.

If that is true, he tells me, “then I think that Hawking, Kurzweil and Warwick are correct, that there will be a time at which machine intelligence is better than human on all aspects that we can imagine.”

There are, however, “at least three good foundational reasons for doubting whether some critical aspects of human mentality can be engineered by the execution of a computer program”.

One, no purely computational system can ever understand the symbols that it manipulates.

Two, for a machine to develop truly humanlike intelligence, it must possess consciousness – an impossible fantasy, as Bishop argued in a 2002 paper, Dancing with Pixies, and as likely as consciousness being located in “the cup of coffee I’m drinking or the chair I’m sitting on”.

Scientists, he says, “don’t want to think there are pixies lurking in their cup of coffee, and so we are drawn to reject the idea that the execution of a computer program brings forth consciousness”.

Three, as the physicist Roger Penrose pointed out in his 1989 book, The Emperor's New Mind: "There is something about mathematical insight – the 'Aha!' moment – that is fundamentally non-computable," says Bishop.

He concludes that “we’ve got pretty strong grounds for thinking there will always be a gap between what computers can do and what computers plus humans can do”.

This, pretty much, was the conclusion drawn in 1973 by James Lighthill, Hawking’s immediate predecessor as Lucasian Professor of Mathematics at Cambridge.

Lighthill had been asked by the British government to evaluate the state of AI research and in his subsequent report, Artificial Intelligence: A General Survey, concluded that the idea of "a general-purpose robot ... that could substitute for a human being over a wide range of human activities" was a mere "mirage".

If the results of last year’s annual Loebner Prize are anything to go by, it remains no less of a mirage 40 years on.

The prize, run by the Society for the Study of Artificial Intelligence and Simulation of Behaviour, is one of the oldest competitions to test a computer’s ability to match or mimic human behaviour.

Judges use computer terminals to interact with the unseen competitors, each of which comprises two entities – one human, the other an AI system. After interrogation, the judge must decide which of the two is human.

Some of the answers given by “Rose” – which, with a score of 89.17 per cent, was last year’s winning system – will be depressingly familiar to users of Siri, Google and other voice-controlled bots. In response to the query: “What’s the weather like where you are?”, Rose offered: “I’m an American woman.”

Yet Rose was positively superhuman compared to Masha, in 19th position with a mere 35 per cent. To the same query about the weather, it replied: “You will not believe, but as a child I was in love with Jim di Griza. Garrison famously created his image. I always dreamed that one day he will take me with this wretched planet.”

“Take me to your leader”, it ain’t. Hardly the small talk one might expect from an entity poised to take over the world.

But that doesn’t mean we should relax, says Bishop. Hawking is right to be afraid, but for the wrong reason.

“We don’t have to imagine the scenario that Hawking, Warwick and Kurzweil paint, of superhuman intelligent robots, to be scared of what robots might do to humanity,” he says.

“There is every reason to be alarmed about autonomous weapons, for example, without thinking that the AI in them is cleverer than humans. I call it Artificial Stupidity.”

While we remain relatively hopeless at producing general-purpose, all-thinking, all-dancing robots, we are becoming frighteningly good at creating semi-autonomous entities, which, once “briefed” by human programming, are allowed to go off and pursue their dedicated function, often with unforeseen consequences.

Artificial Stupidity was demonstrated perfectly by the strange case of the US$23-million textbook. No one, not even its author, Peter Lawrence, could have expected The Making of a Fly, his dry, out-of-print 1992 academic treatise on developmental genetics, to become, briefly, the most expensive book on Amazon in 2011.

But that was to reckon without the automated pricing bots that are increasingly being used online by Amazon book dealers. Programmed to trawl the web for rival offerings according to predetermined marketing strategies, they will match, marginally exceed or marginally undercut their rivals’ prices.

In this case, two bots with incompatible agendas locked horns and, unnoticed by their human “masters”, between them automatically bumped up the price to an absurd level. On one book dealer’s Amazon page, The Making of a Fly peaked at $23,698,655.93 (plus $3.99 for shipping).

The bots were using algorithms “that didn’t have a built-in sanity check on the prices they produced”, noted the Berkeley biologist and blogger Michael Eisen, one of whose students had spotted the pricing arms race. There were, he concluded, “seemingly endless possibilities for both chaos and mischief”.
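The mechanism is simple enough to reproduce. From the two sellers’ listings, Eisen worked out that one bot was pricing its copy at roughly 0.9983 times its rival’s, while the other marked its copy up to roughly 1.27 times the first. A few lines of Python, with an illustrative starting price, show how two such rules, left unchecked, compound into absurdity.

```python
# Minimal simulation of the pricing feedback loop described above. The two
# multipliers are roughly those Michael Eisen inferred from the sellers'
# listings; the starting price is illustrative, not the real one.
price_a = 35.54   # hypothetical starting price, in dollars

for _ in range(30):
    price_b = round(price_a * 1.270589, 2)  # bot B marks its copy up over bot A's
    price_a = round(price_b * 0.9983, 2)    # bot A slightly undercuts bot B
    # Neither bot applies a sanity check, so the prices compound without limit.

print(f"After 30 rounds: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
```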

It is in the light of this, says Bishop, that we should consider the army of robot sentinels, armed with machine guns and grenade launchers and capable of detecting and killing human intruders from more than three kilometres away, that is gathered on the southern side of Korea’s demilitarised zone.

So far, he says, humans have been left in the decision loop of the deadly Samsung SGR-A1, “but there’s purely a software switch that can be flipped any time to make these robots go into fully autonomous mode”.

Last month, a study by researchers at Darmstadt University of Technology in Germany concluded that attempts to programme “values and principles of conduct, such as the Geneva Convention” into lethal autonomous weapons would fail, precisely because their intelligence was not human and lacked the ability to make decisions based on human moral values.

Faced with a moral quandary – kill a long-wanted terrorist, saving an unknown number of lives in the future, or spare him and the innocent children in the kill zone? – an algorithm would just hazard a guess.

In the paper Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons, the authors highlight Israel's use of the Guardium, an unmanned and potentially entirely autonomous armed vehicle that has been prowling Israel's border with Gaza for the past three years.

The armoured Guardium is not only equipped with cameras and a range of sensors to allow it to detect incursions, but is armed with lethal and non-lethal weapons and programmed to respond accordingly to threats.

So far, remotely monitored and controlled by humans, it hasn’t been left to its own devices – but the point, say the authors, is that it could be, with the press of a key. Such Unmanned Ground Vehicles take the concept of remotely controlled drones “one step further and aim to make human agency fully redundant in the control loop”.

But human agency is already fully redundant on board the X-47B Unmanned Combat Air System.

America’s contemporary unmanned armed drones, responsible for the deaths of many innocent bystanders as well as their intended targets, are not autonomous – they are flown by remotely located human pilots. But for the past seven years, Northrop Grumman and the US navy have been developing the strike-fighter-sized X-47B, an entirely autonomous unmanned drone, with no human operators in the control loop, capable of carrying 2,000 kilograms of bombs or missiles.

“It isn’t very often you get a glimpse of the future,” said Ray Mabus, the secretary of the navy, after witnessing the historic first landing by the aircraft on the USS George HW Bush in July 2013. Such aircraft, he said, “have the opportunity to radically change the way presence and combat power are delivered from our aircraft carriers.”

Others, however, have expressed reservations about who would be accountable for the drone’s strike capabilities.

“Lethal actions should have a clear chain of accountability,” Noel Sharkey, professor of artificial intelligence and robotics at the UK’s University of Sheffield, told the Los Angeles Times in 2012. “This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorised it?”

If the Campaign to Stop Killer Robots has anything to do with it, the days of Guardium and other such weapons could soon be numbered.

An international coalition of nine NGOs launched in October 2013 with the objective of pre-emptively banning all autonomous weapons, the campaign has already succeeded in forcing the issue onto the agenda of the UN. In April this year, the UN will hold a dedicated meeting of experts under the umbrella of its Convention on Conventional Weapons, which was adopted in 1980 and has been signed by 115 nations, including the UAE.

The UN’s under-secretary-general, Michael Møller, has urged delegates to “take bold action ... you have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control”.

Unsurprisingly, perhaps, the military-industrial complex evoked by president Eisenhower in his 1961 farewell speech, in which he warned of “the potential for the disastrous rise of misplaced power”, takes a less clearly defined moral stance on the issue.

In 2006, five years before he died, the 79-year-old McCarthy attended a conference at Dartmouth College to mark the 50th anniversary of his original symposium on artificial intelligence.

The event was paid for with a $200,000 (Dh735,000) grant from Darpa, the US government’s Defense Advanced Research Projects Agency, on the condition that the participants “focus on US defense and homeland security needs”.

The agency’s mission is “creating and preventing strategic surprise”, and autonomous weapon systems are high on its agenda.

Take one of its current projects, the Anti-Submarine Warfare Continuous Trail Unmanned Vessel – an autonomous submarine hunter designed “under the premise that a human is never intended to step aboard at any point in its operating cycle”.

Substitute “Cyberdyne” for “Darpa”, and you don’t have to be Stephen Hawking, or even Terminator-killing resistance leader John Connor, to see where all this might be heading.

Jonathan Gornall is a regular contributor to The National.

thereview@thenational.ae