What are deepfakes and why should we be concerned?

Deepfake technology could be widely used for identity theft

The launch of Mohamed bin Zayed University of Artificial Intelligence in Masdar City, Abu Dhabi, on October 16, 2019. Chris Whiteoak / The National

So-called "deepfake" material is created by digitally manipulating original video footage or audio recordings.

The technology required to do the work is widely available and can be downloaded from the internet.

Its apparently growing use, however, has caused public controversy around the world.

China has made the production of fake content without an accompanying consumer warning a criminal offence, while in California it is illegal to produce or share doctored material of politicians within 60 days of an election.


Speaking to The National, UAE cyber security experts warned the public to be on their guard against falling victim to the technology.

They said increasingly sophisticated techniques were being employed to carry out identity theft, with fraudsters manipulating recordings of an individual's speech to fool voice recognition software.

"All cyber criminals need is a voice clip of someone and a few images to create a profile and put their fingers in the puppet, so to speak," said Matt Walmsley, director for the Europe Middle East and Africa region at technology company Vectra.

“The technology can already be extremely convincing and it’s only going to create more problems as the decade continues.

“Society is really not ready for this and it might be the case it has to become a real problem before that happens.”

Matt Walmsley, director for the EMEA region of tech company Vectra, has warned that people need to be aware of the dangers of deepfake videos.

The use of deepfakes came to prominence last year when fake video messages from high-profile figures such as Donald Trump, Barack Obama and Facebook chief executive Mark Zuckerberg were widely circulated online. The digital editing software can be so sophisticated that distinguishing genuine material from doctored content is extremely difficult.

Mr Walmsley said international banks were always exploring new ways to bolster their security measures to protect customers from fraud. But he said that if they were to adopt voice or facial recognition software as part of their customer screening processes, they would have to ensure the system was protected from deepfake hackers.

"Banks are going to be more advanced with security in the coming years and are considering the use of voice and facial recognition software," Mr Walmsley said. "However, criminals are going to use more advanced methods as well, so people need to be aware of the risks involved with their own personal data.

“Criminals might use the technology to build a profile of someone and slowly ascertain their information and eventually get hold of their passwords.

“People need to be more mindful. They will often take the easy option of trading their privacy for a simple life and you see this when someone downloads a new app and thinks nothing of agreeing to the terms and conditions without reading them properly.”

Dr Paul Hopkinson, associate head of Edinburgh Business School at Heriot-Watt University Dubai, also said artificial intelligence would probably have a significant impact on privacy in the decade ahead.

“Privacy could be compromised,” he said. “AI can track and analyse an individual’s every move online and physically. Facial recognition algorithms can be used to find out who you are.”