UK warns of tech majors' control of AI

Regulators should ensure that the large datasets on which artificial intelligence depends are not monopolised by a handful of large technology companies

A Nippon Telegraph and Telephone East Corp. Robo Connect communication robot demonstrates at the Artificial Intelligence Exhibition & Conference in Tokyo, Japan, on Wednesday, April 4, 2018. The AI Expo will run through April 6. Photographer: Kiyoshi Ota/Bloomberg

An influential body of UK policymakers has said regulators should stop major technology companies from dominating the field of artificial intelligence, and also warned of the potential for widespread unemployment due to the technology.

British anti-trust regulators should ensure that the large datasets on which artificial intelligence depends are not monopolised by a handful of large technology companies, such as Alphabet, IBM and Microsoft, the House of Lords Select Committee on Artificial Intelligence said in a report published on Monday.

"Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape," the report, which followed a nine-month inquiry into all aspects of AI development in the UK, said. The committee received 223 pieces of written evidence and interviewed 57 witnesses during the course of the investigation.

However, it stopped short of recommending the creation of a new overarching ministry to serve as a watchdog for the emerging technology.

"We don’t see the need for an overarching regulator," said Timothy Clement-Jones, the chairman of the committee. But he said that the Financial Conduct Authority, for instance, should be aware of how insurance companies are using machine-learning algorithms to help determine someone’s premiums or how banks are using such technology to determine whether to extend credit.

The policymakers also urged the government to be vigilant about the potential for widespread job losses due to the adoption of AI across the economy, but stopped short of endorsing any radical policy solutions, such as a universal basic income, that some have advocated.

"We believe that AI will disrupt a wide range of jobs over the coming decades, and both blue- and white-collar jobs which exist today will be put at risk," it said.

Instead, the committee said that the government must invest more heavily in adult retraining programmes and called on industry to match government funding for them.

"The UK is a world leader in AI and has many opportunities available to it, but it won’t be able to take advantage of those opportunities unless we mitigate some of the risks involved," Mr Clement-Jones said.


The government also needs to do more to ensure the UK continues to have enough people with machine-learning skills, the committee said. This is especially important as Britain prepares to leave the European Union, since many workers with AI skills in the UK are currently drawn from abroad.

It urged the government to further increase the number of Tier 1 visas, available each year to exceptionally talented individuals, including AI researchers with PhDs.

The government has said it is doubling the number of Tier 1 visas available each year to 2,000, but the committee said the government should increase it again. It also said that machine-learning and artificial intelligence roles should be added to the critical skills shortage list that qualifies people for Tier 2 visas.

"We have got a skill shortage currently and we rely quite heavily on bringing those skills in from outside and so the new visa regime must really be fit for purpose," Mr Clement-Jones said.

The report said the UK Ministry of Defence should change its definition of autonomous weapons systems to bring the country more in line with others. Currently, the British military defines such weapons as those "capable of understanding higher-level intent and direction", a high bar that means very few weapons currently on the market meet the standard. By contrast, other countries define such systems as those able to select targets on their own, without human intervention.

The United Nations is currently discussing whether limitations should be placed on the use of what are called "lethal autonomous weapons systems". A number of prominent figures in the development of artificial intelligence, including billionaire Elon Musk and Mustafa Suleyman, a co-founder of DeepMind, the artificial intelligence company owned by Alphabet, have signed a petition calling for an outright ban on such weapons.