Google CEO calls for aligned AI regulation as US and EU plans diverge

Artificial intelligence is 'too important' not to regulate, Sundar Pichai wrote

Sundar Pichai, chief executive officer of Alphabet Inc., departs after a discussion on artificial intelligence at the Bruegel European economic think tank in Brussels, Belgium, on Monday, Jan. 20, 2020. Pichai urged the U.S. and European Union to coordinate regulatory approaches on artificial intelligence, calling their alignment “critical.” Photographer: Geert Vanden Wijngaert/Bloomberg

Alphabet and Google chief executive Sundar Pichai called for international cooperation on regulating artificial intelligence in an op-ed for the Financial Times on Monday.

Mr Pichai is the latest in a line of big tech leaders, including Jeff Bezos, Jack Ma, Bill Gates and Elon Musk, to weigh in on regulation amid rapid technological advances that raise privacy concerns. Alphabet, the world's third biggest technology company by market capitalisation after Apple and Microsoft, topped $1 trillion in value for the first time last week and owns one of the most influential AI companies in the world.

"Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to," Mr Pichai wrote. "The only question is how to approach it."

The EU and US are starting to develop regulatory proposals and cooperation is "critical to making global standards work" in tandem with the need for an "agreement on core values", he said. Yet the opposite is happening.

The EU is mulling a ban on facial recognition technology for three to five years while it works out how to prevent abuses, according to Reuters. Meanwhile in the US, the Trump administration started the year by laying out an AI framework that would avoid "overreach". The executive order calls on federal agencies to conduct a "cost-benefit analysis" of a given AI application before regulating it.

Policymakers need to go beyond assessing AI based on commercial value and potential uses, Mr Pichai said.

"Companies such as ours cannot just build promising new technology and let market forces decide how it will be used," he wrote. "Government regulation will also play an important role."

He pointed to existing rules such as Europe’s General Data Protection Regulation as a starting point.

"Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways ... balancing potential harms, especially in high-risk areas, with social opportunities."

Some technologies, like heart rate monitors, can rely on existing laws, while emerging ones such as self-driving cars need brand-new regulation, he wrote.

Mr Pichai's op-ed comes just days after the New York Times published an investigative report that revealed contentious operations of facial recognition company Clearview AI, whose work it said "might end privacy as we know it".

Clearview AI relies on a database it built of more than three billion images of people scraped from Facebook, YouTube, Venmo and millions of other websites. The company has 600 law enforcement agencies and at least a handful of private sector organisations as clients, allowing them to identify individuals from images of their faces.

Google itself held back rolling out a similar technology in 2011 because of how it could be weaponised, the Times reported.