Globally accepted rules needed to optimise AI potential
Exclusive: AI could potentially deliver additional economic output of around $13tn globally by 2030, says McKinsey
A universal set of rules that is unbiased and ensures legitimate governance of artificial intelligence (AI) is required to achieve future goals, said a top executive of the US-based consultancy McKinsey.
“To some level we do have a universal set of principles in the abstract of the Declaration of Human Rights and in other documents of the United Nations … they have [not yet] been tailored and updated to reflect AI-specific set of concerns,” Jonathan Woetzel, director of McKinsey Global Institute and senior partner at the company, told The National.
“But I think we will get there … problems that AI address are global, they are not confined to one nation.”
According to McKinsey’s research, AI holds immense economic potential globally. It could deliver additional economic output of around $13 trillion by 2030, boosting global gross domestic product (GDP) by nearly 1.2 per cent a year, said the consultancy in a September 2018 report.
“AI applications can do anything from ordering a pizza to nuclear warfare … we need to assess the risk as the starting point,” said Mr Woetzel.
AI governing principles should be clear and unbiased, while decisions regarding its use need to be evidence-based and consistent over time, said Mr Woetzel, who was in Dubai to attend the Global Governance of AI Roundtable (GGAR) at the World Government Summit 2019.
GGAR, which is developing a multi-layered AI governance framework, has begun a dialogue on pathways to developing effective, culturally adaptable norms.
“One of the aims of GGAR is to build a stakeholder guidebook for safe and ethical AI,” said Mr Woetzel.
“Although we have many AI codes, none have been accepted as being the global community's AI code. We are working towards that ... but I argue that it should not change the principles under which the global community is trying to define what is and isn’t fair.”
The presence of developed infrastructure, easy access to technology and leadership committed to developing the technology further are among the prerequisites for the development of AI, said Mr Woetzel, adding, “the UAE has these [factors] in abundance. I expect that the UAE will be a leader in the deployment of AI and in its development as well.”
The UAE economy, the Arabian Gulf region’s second-largest, is forecast to gain most from the adoption of AI, according to a report by PwC, with the technology contributing up to 13.6 per cent to the country’s GDP – equivalent to Dh352.5 billion – by 2030.
The Emirates will be followed by Saudi Arabia, where AI is expected to contribute 12.4 per cent to GDP, and by the GCC-4 – Bahrain, Kuwait, Oman and Qatar – at 8.2 per cent. By comparison, AI will contribute 26.1 per cent to China’s GDP and 14.5 per cent to North America’s by 2030, PwC said.
Mr Woetzel said there will be global "sandboxes" for AI that will depend on practical uses, such as the sharing of data to establish universal trends before crucial decisions are taken. A sandbox is a type of software testing environment that enables the isolated execution of software or programs for independent evaluation, monitoring or testing.
“That [sandbox] is often used in areas like mobility, autonomous vehicles, health care – practical approaches that carefully monitor the new technologies in real-world conditions.”
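The isolated-execution idea behind such a sandbox can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of running untrusted code in a separate, isolated process with a time limit; it is not drawn from McKinsey, GGAR or any specific sandbox product, and the function name `run_in_sandbox` is invented for this example.

```python
import os
import subprocess
import sys
import tempfile


def run_in_sandbox(code: str, timeout: float = 2.0) -> str:
    """Run untrusted Python code in a separate, isolated process.

    The child process is started with the interpreter's -I flag
    (isolated mode: ignores environment variables and the user's
    site-packages), and is killed if it exceeds the timeout.
    Returns whatever the code printed to stdout.
    """
    # Write the untrusted code to a temporary file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout.strip()
    finally:
        os.unlink(path)  # Clean up the temporary file.


print(run_in_sandbox("print(2 + 2)"))  # → 4
```

Real sandboxes for autonomous vehicles or health-care systems add far stronger controls (resource limits, network isolation, auditing), but the principle is the same: the code under evaluation runs in a contained environment where it can be monitored without affecting the host.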
Updated: February 12, 2019 03:55 PM