New rules stress human oversight and consumer protection in areas such as creditworthiness checks and automated claims decisions
Humans start to put financial robots in their place
Regulators are beginning to teach robots who’s the boss.
After spending billions of dollars on cutting-edge artificial intelligence technologies, Europe’s banks and insurers face tougher scrutiny of the tools they use to help root out fraud, check borrowers’ creditworthiness and automate claims decisions. European Union General Data Protection Regulation (GDPR) rules taking effect this week stress human oversight and consumer protection, which may hamper companies trying to build the tools of the future.
“Companies developing AI technologies will have to consider and embed the data protection issues into the design process,” said David Martin, senior legal officer at Brussels-based consumer advocate Beuc. “It’s not something where they can just tick a box at the end.”
The rules could present an obstacle to coders looking to design ever more sophisticated algorithms. That may handicap EU firms competing with rivals in the US and Asia to develop new technologies, according to Nick Wallace, a Brussels-based senior policy analyst at the Centre for Data Innovation, a nonpartisan and nonprofit research institute.
“For an algorithmic model to be transparent to a human, even a human with a fairly good understanding of algorithms, it needs to be kept within a certain level of complexity,” Mr Wallace said. “The more abstractions you have, let alone the more data points, the harder it’s going to be for any human being to sit down, read through all of it and scrutinise the decision.”
Regulators worldwide are trying to catch up with the financial industry’s rush to automate everything from trading desks to lending decisions and customer help-desks. The banking industry will invest $3.3bn in AI and related technologies this year, making it the second-biggest spender after retail, research firm International Data Corporation (IDC) estimates. Overall spending on the technologies will grow to $52.2bn by 2021 from about $19bn this year, according to IDC.
The GDPR, which takes effect on May 25, will generally require firms to get consent from people when their personal data is used to fully automate certain types of decisions that have significant effects, such as whether to award a loan. Clients will have the right to demand that a human employee at the firm intervene and review a decision, and they will have the power to get details about an automated process to help guard against discriminatory practices.
“Major corporations recognise that this is a challenge and that privacy rights and data protection rights need to be given full consideration during the design and development of any kind of product or service,” said John Bowman, London-based senior principal at IBM’s Promontory Financial Group subsidiary.
As policymakers ironed out the details of the regulations over the past year, financial industry lobbies including the Association for Financial Markets in Europe and UK Finance pressed authorities to tread softly and to acknowledge ways the technologies can benefit consumers. In a 24-page letter to policymakers, the European Banking Federation said “profiling activities should not necessarily be perceived as having a negative impact on customers.”
The law is being closely watched by the insurance industry, where four out of five executives say that AI systems will be used alongside human staffers within the next two years, consultant Accenture said in a report this year.
The UK arm of Ageas, a Brussels-based insurer, is looking to speed up the handling of thousands of car insurance claims by using AI software to review images of vehicle damage and help estimate the cost of repairs. GDPR won’t affect the current technology, and the insurer has included the law’s requirements in its processes, an Ageas spokeswoman said.
Allianz, Europe’s biggest insurer, uses data and machine-learning technologies in several areas of its insurance business. That includes automating what was once a paper-based and manual underwriting process for small- and medium-sized businesses.
Automated decision-making is typically based either on the customer’s consent or on its being necessary for entering into a contract, according to Philipp Raether, chief privacy officer at Munich-based Allianz Group.
“In scenarios where profiling is necessary for entering into a contract, this will be made transparent to the customer in an understandable way,” Mr Raether said.