What Socrates might have to say about artificial intelligence and the moral dilemmas behind its military use

Should experts in AI and machine learning think twice if the products of their labour are deployed for defence purposes? asks Olivier Oullier

In Plato's Republic, Socrates challenges the idea that justice boils down to telling the truth and returning what one has borrowed. He raises a pointed dilemma: what if the goods in question would harm other people if returned?

In other words, returning a weapon to its owner poses a moral dilemma, because the weapon could then be used to harm people. For Socrates, there was no doubt that protecting people should always prevail.

More than two millennia later, Socrates's view remains strikingly relevant in light of recent developments regarding the use of artificial intelligence in warfare. From autonomous weapons and robot soldiers to terabytes of video captured by drones, many countries including China, Russia and the US are leveraging AI to improve their military strategies and operations.

Despite the tremendous budgets at stake and the need for countries to be able to defend themselves, should researchers and engineers specialising in artificial neural networks, machine learning and deep learning allow the products of their labour to be used to kill people, even if those same products could also be used to save lives?

With the advent of the Fourth Industrial Revolution and AI, more leaders in the public and private sectors are working with philosophers, psychologists and neuroscientists to better understand how people deal with such dilemmas and make moral judgments and decisions.

Seventeen years ago, Joshua Greene and his colleagues at Princeton University published a seminal article in the journal Science, using brain imaging to better understand what happens in people's brains as they make decisions when facing moral dilemmas. The team of neuroscientists found that people's judgments about what to do vary depending on how emotionally engaged they are.

For instance, in one well-known scenario in which a person must choose whether to sacrifice one life to save five, pressing a button to do so remotely or physically pushing someone produced significantly different responses in participants' brains, even though the outcome was the same. These findings are now being used to help autonomous vehicles make decisions, but the neuroscience of ethical decision-making resonates far beyond the automotive industry.

Thousands of Google employees expressed concern earlier this year when they found out that their company was involved in Project Maven, the nickname for the US Department of Defence's Algorithmic Warfare Cross-Functional Team, established in 2017. This project is part of the DOD's massive effort to leverage AI to improve the efficacy of US military operations, starting with the analysis of information and footage recorded by drones.

According to The Wall Street Journal, in 2017 alone the DOD spent $7.4 billion on AI-related projects.

The employees signed an open letter to the company's chief executive Sundar Pichai, stating: "Building this technology to assist the US government in military surveillance – and potentially lethal outcomes – is not acceptable".

A dozen employees even resigned. Soon after, Google announced that it would not renew its Project Maven partnership with the DOD once the current contract expires in March 2019.

But this move will not stop the department's AI efforts and investments: in July it announced an $885 million, five-year contract with Booz Allen Hamilton for large-scale AI systems. Many similar partnerships can be expected around the world in the near future.

In light of the tremendous investments in the public and private sectors and the growing number of projects around the world to leverage AI in warfare, some scientists and business executives have decided to voice their dissent.

More than 2,000 of them signed a pledge launched last month, supported by the Future of Life Institute, demanding that governments introduce strong international norms and laws against lethal autonomous weapons. In the absence of laws, they wrote, "we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons".

Elon Musk of SpaceX and Neuralink and Demis Hassabis of Google DeepMind, as well as several Nobel laureates, are among the signatories of the pledge. In addition, 26 countries in the United Nations have "explicitly endorsed the call for a ban on lethal autonomous weapons systems".

In a report entitled Values, Ethics and Innovation: Rethinking Technological Development in the Fourth Industrial Revolution, published earlier this month, the World Economic Forum offers operational solutions to put values and ethics at the heart of technological and societal development.

Very wisely, its authors urge all stakeholders involved, including governments and citizens, not to lose sight of what technological development should be about: social progress and the wellbeing of humanity. It is a message that Socrates himself might have endorsed.

Professor Olivier Oullier is the president of Emotiv, a neuroscientist and a DJ