Mark Zuckerberg and Elon Musk's debate over artificial intelligence: will robots go rogue?

Will the rise of the robot be our downfall? In the debate over AI, we ask if humans are ready for ‘The Singularity’

Science fiction has taught us, over many decades, to fear being destroyed by robots. There was a perfect example of this yesterday, as a rather humdrum news story from June about the development of Facebook’s chatbots suddenly exploded across the media. Breathless reporters told us that “panicking” Facebook engineers shut down the project when they discovered bots talking to each other in a language of their own invention.

“This is how it starts,” said one prophet of doom on Twitter, envisaging a scenario where robots agree among themselves to “Annihilate Earth”, but we don’t notice because we have no idea how to speak Robot.

In fact, all the bots had done was to sidestep the niceties of English grammar in order to understand each other better, which is something humans do all the time. “I can can I everything else” was one example of their linguistic invention, which sounds a little clumsy but isn’t as worrisome as something impenetrable like “X&ZPP29 4H27%V5”. Facebook’s engineers, aware that they were meant to be building a tool that enabled bots to communicate with humans, simply tweaked the settings to force them to stick to English sentence structures. Planetary disaster avoided.
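To picture what that tweak amounted to, here is a toy sketch (ours, not Facebook’s actual code; the function and numbers are invented for illustration). If a bot is scored only on how well it negotiates, nothing penalises drift away from English; adding a second term that rewards human-readable output is enough to anchor it.

    # Toy illustration of reward-shaping, not Facebook's actual system.
    # Scored on negotiation alone, a bot's language drifts; a second term
    # that rewards plausible English pulls it back.

    def reward(task_score: float, english_likelihood: float,
               language_weight: float = 0.0) -> float:
        """Combined score for one utterance by a negotiating bot.

        task_score         -- how well the utterance served the negotiation
        english_likelihood -- how plausible it is as English, from 0 to 1
        language_weight    -- 0.0 lets language drift; > 0 anchors it
        """
        return task_score + language_weight * english_likelihood

    # With no language term, "i can can i everything else" scores as well
    # as fluent English, so there is nothing to stop the drift:
    print(reward(task_score=5.0, english_likelihood=0.1))  # 5.0
    print(reward(task_score=5.0, english_likelihood=0.9))  # 5.0

    # Turn the language term on, and fluent English beats efficient gibberish:
    print(reward(task_score=5.0, english_likelihood=0.1, language_weight=10))  # 6.0
    print(reward(task_score=4.5, english_likelihood=0.9, language_weight=10))  # 13.5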

This fear of what might happen when computers become more intelligent than us bubbles up with increasing frequency as progress is made in the field of artificial intelligence (AI). Only last week, two of tech’s billionaires, Facebook’s Mark Zuckerberg and SpaceX chief executive Elon Musk, had a very public war of words over the dangers of unregulated AI experimentation.

Musk has voiced his concerns about this for many years, describing AI as a “fundamental risk to the existence of human civilisation” and donating large sums of money towards developing AI in a way he believes is safe. Zuckerberg, by contrast, is a shoulder-shrugging optimist; he employs an entire AI research team that’s supposedly focused on making our lives “better in the future”, and he criticised Musk (without mentioning him by name) for irresponsible doom-mongering. “If you’re arguing against AI,” he said, “then you’re arguing against safer cars… against being able to better [medically] diagnose people”. Musk chose to respond via Twitter: “His understanding of the subject is limited.”

It was, admittedly, something of a straw man argument from Zuckerberg. Musk isn’t trying to suppress the use of AI as a problem-solving tool; his concerns lie further down the line, and those concerns are shared by a number of scientists and futurologists. In a 2014 piece for The Huffington Post, Stephen Hawking warned of “complacency”, and asked how we might improve our chances of “avoiding the risks” associated with AI.

What are those risks, specifically? Musk talks of artificial general intelligence, or AGI: the point at which a computer’s intellect matches or exceeds that of human beings, a moment also known as “The Singularity”. Beyond it, he believes, the machine could become a superintelligent autonomous agent, able to redesign itself in ways we’re unable to understand. People seem unable to grasp this idea, he says, until they “see robots going down the street killing people”.

“If we’re lucky, they’ll treat us as pets,” says Paul Saffo, a consulting professor at Stanford University, “and if we’re very unlucky, they’ll treat us as food.” But there are academics, including Noam Chomsky, who don’t believe computers will ever attain that level of intelligence. Yes, a machine might learn to speak Chinese, but will it ever truly understand Chinese, or merely simulate understanding, as the philosopher John Searle famously asked in his “Chinese Room” thought experiment? These questions are all bound up with concepts of intelligence, sentience, self-awareness and consciousness, things that remain stubbornly impervious to scientific analysis. Zuckerberg’s angle on AI, shared by other industry figures such as Google’s Ray Kurzweil, is that super-smart micro-intelligences offer great benefit to mankind and will always remain under our ultimate control.

But an AI machine is still capable of causing catastrophe without being self-aware; it need only be unsupervised and very good at its job. The most celebrated illustration of this was outlined by the philosopher Nick Bostrom in 2003 and has become known as the Paperclip Maximizer: an AGI designed to create as many paperclips as possible which, through sheer monstrous efficiency, could theoretically repurpose the entire solar system as a paperclip factory without being inherently “evil”. It’s a deliberately absurd image, but it makes the point very well: an AI may not necessarily have the best interests of humans at heart.
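Bostrom’s point can even be reduced to a few lines of code. What follows is a deliberately crude sketch (ours, not Bostrom’s, with invented numbers): the agent’s objective mentions only paperclips, so nothing in it ever argues for stopping.

    # A crude sketch of the Paperclip Maximizer thought experiment.
    # The objective counts only paperclips, so nothing in it ever
    # argues for leaving any resources alone.

    def paperclip_maximizer(available_matter: float,
                            clips_per_unit: float = 10.0) -> float:
        """Convert every reachable unit of matter into paperclips."""
        paperclips = 0.0
        while available_matter > 0:
            unit = min(1.0, available_matter)    # seize the next unit
            paperclips += unit * clips_per_unit  # the only quantity scored
            available_matter -= unit             # people, cities and planets
        return paperclips                        # are just more raw material

    # The solar system as a stock of raw material (arbitrary units):
    print(paperclip_maximizer(available_matter=1_000_000.0))  # 10000000.0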

The argument between Musk and Zuckerberg is aligned with their own interests and goals. Zuckerberg could be characterised as an enthusiast in pursuit of “cool stuff”; he wants to use the power of AI to streamline our existences and bind us to the technology. He certainly doesn’t want prominent figures publicly questioning AI strategy; he needs Facebook users to be onside, happy with the benefits that AI brings and not inquiring too deeply into the advanced technology that’s purring away behind the scenes. Reassuring us that change is OK and progress is good is the entire modus operandi of many of tech’s biggest players, from Apple to Facebook to Google.

Musk’s concerns are short on specifics, but he clearly fears a world where humans lose their individual agency. What’s far from clear is the timescale involved; while Kurzweil pegs the arrival of The Singularity at 2045, others doubt it will ever arrive. Andrew Moore, dean of the computer science school at Carnegie Mellon University, describes worrying about superintelligent computers as “a tragic waste of time”.

It’s a compelling topic because it’s impossible for humans to precisely define the problem. We’re being asked to ponder the nature of an intellect that exceeds our own. Never in human history have we had to consider what the priorities might be of an entity not just cleverer than one of us, but cleverer than all of us put together.

We can’t imagine it, because by definition we’re too stupid. This is a tough realisation; little wonder that it makes some people so very uneasy.