Artificial intelligence will enhance us, not replace us

As the UK considers the role of government in AI, how will it impact job security? asks Professor Henrietta Moore

A SoftBank Pepper humanoid robot stands between a Boston Dynamics Atlas humanoid robot and a Spot robot on display at SoftBank Robot World 2017 in Tokyo, Japan, on November 21, 2017. Kiyoshi Ota / Bloomberg

In his 1990 book The Age of Intelligent Machines, the American computer scientist and futurist Ray Kurzweil made an astonishing prediction. Having studied at the Massachusetts Institute of Technology (MIT) and seen firsthand the remarkable advances in artificial intelligence pioneered there by Marvin Minsky and others, he forecast that a computer would pass the Turing test – the test of whether a machine's intelligence is indistinguishable from a human's – between 2020 and 2050.

Kurzweil, now a director of engineering at Google working on artificial intelligence, or AI, has since refined his claim. He now says this will happen by 2029. What’s more, in 2045 we will witness what he calls "the singularity" – the point at which human and artificial intelligence merge, leading to exponential advances in technology and human capabilities.

Exciting stuff. Or is it? While Kurzweil is famously optimistic about the effect AI will have on human lives, others aren’t so sure. Part of this stems from the fear, instilled by a thousand sci-fi movies, that “the robots will take over” – either rendering humans functionally useless or, worse, becoming our masters in a dystopian role reversal.

The truth may lie somewhere in between. But what’s increasingly clear is that AI is advancing at a rapid pace. Already, it is posing profound questions about the future of work, of society and the very nature of what it means to be human.

We can get a sense of this from current innovations. Whether it’s self-driving vehicles, devices like Amazon’s Echo that can "understand" human language, or the intelligent crunching of vast medical datasets to diagnose disease more accurately, we’re moving towards a place where all manner of tasks are automated and human error – or perhaps human judgment – is obviated.

A recent report by the global consultancy McKinsey estimated that almost half (49 per cent) of the activities people are paid nearly $16 trillion in wages to do in the global economy have the potential to be automated by adapting currently demonstrated technologies.

This month I had the chance to discuss some of these implications before a House of Lords select committee in the UK's Parliament. One issue the committee is considering is whether to recommend appointing a dedicated minister for AI, to provide a coordinated response to these shifting sands across government departments.

In this, the UK would be playing catch-up to the UAE, which, in a world-first, recently appointed 27-year-old Omar bin Sultan Al Olama to this role. Speaking to The National recently, Mr Al Olama set out his positive, practical vision for AI, saying it could offer humanity a "quick win" in helping to tackle climate change and other pressing problems.

While the UAE might be famous abroad for its glittering, futuristic cities of Dubai and Abu Dhabi, it is also situated in one of the world's most ecologically sensitive regions and has an acute economic need to move away from fossil fuels as the main source of its wealth – so government-led nurturing of AI's potential for sustainable ends makes a lot of sense.

Nor is the UAE alone in wanting to reap the economic benefits of AI. Indeed, across the world, it’s no exaggeration to say that something of an AI arms race has begun. China recently announced its intention to dominate the sector, creating a $150 billion industry by 2030, in direct competition with the US. China’s output of academic papers on artificial intelligence overtook the 28 EU countries combined for the first time last year.

Of course, we all hope that technological advances will continue to happen and will continue, on balance, to benefit humanity. But there's little doubt there will be unintended consequences, some of which are already manifesting themselves. One thing I'm particularly concerned about is the impact on job security. Sitting next to me at the House of Lords committee meeting was Olly Buston, the chief executive of the think-tank Future Advocacy, which recently published a report estimating that between 22 per cent and 39 per cent of jobs in the UK are at high risk of automation by the early 2030s. With an average of about 30 per cent across a UK workforce of roughly 32 million people, that represents more than 10 million livelihoods.

In London, where I work, there have been controversies recently over the way new technology-driven companies like Uber, a ride-hailing service, and Deliveroo, a takeaway food delivery company, treat their workers. Because their businesses are built around advanced data services and smartphone apps, they have access to vast numbers of potential customers and willing recruits at the push of a button. But they have chosen, somewhat cynically, to class the people who work for them as self-employed rather than as employees. In the UK, this means those workers have no automatic right to sick pay, holiday pay or pension contributions and no opportunity for career progression.

Replicated at scale across an economy, you can imagine the effect this could have. While many workers will see their job security and welfare safety net vanish, those in control of the technology stand to benefit enormously. The question then becomes: how can we change the social contract so that we don’t just see runaway inequality and wealth polarisation? How can we make sure the increases in productivity and value-generation AI promises benefit all of society, instead of forcing millions into a precarious, hand-to-mouth existence that leaves little room for personal flourishing?

Recently, the Institute for Global Prosperity, of which I am director, put forward a radical new proposal that could help address one aspect of this challenge. Known as universal basic services, it would see many of the essentials of 21st-century life – including housing, food, transport and information technologies – provided free at the point of need. This is a familiar concept in the UK, where our National Health Service has been providing needs-based healthcare for all for the past 70 years. We have calculated that extending provision into these other areas could be afforded at a cost equivalent to about 2.3 per cent of the UK’s GDP. The practical effect would be to dramatically reduce the basic cost of living for most people, giving them greater freedom over their work and leisure choices.

We’re not saying this would be a panacea. But compared to other ideas like a universal basic income – a flat payment to all citizens – it’s far more affordable and could be one measure that helps to alleviate the worst impacts of inequality brought about by an increasingly tech-driven world.

This needs to go hand-in-hand with a much more detailed analysis of how we can upskill our populations through education to make the most of AI’s positive potential. Kurzweil said recently that AI will enhance us, not replace us. His predictions have often been right – but that doesn’t mean we shouldn’t start planning now for this radically changed future.

Professor Henrietta Moore is director of the Institute for Global Prosperity at University College London, where she is Chair in Culture, Philosophy and Design