
Abu Dhabi, UAE. Tuesday 23 April 2019

AI-created photos: a threat or opportunity?

Photos created by artificial intelligence may easily be misused in the future, but we're not quite there yet

Samples generated by Generative Adversarial Networks at DeepMind, the artificial intelligence arm of Google. Andrew Brock

It’s easy to draw a picture of an imaginary person. We have been doing it since we were children, idly sketching doodles of people who exist only in our minds. Computers, however, don’t find it quite so easy. Even when trained with millions of images of human heads, their attempts to generate a realistic-looking face have long been characterised by bizarre asymmetry, surreal contexts and unusual protuberances. Last week, however, a group of researchers from computer graphics firm Nvidia unveiled the results of their most recent experiment: a series of photo-realistic pictures, created by machine, that could easily be mistaken for photographs of real people. The set of images posed a number of questions: what their inherent value might be, to what other uses the technology might be put, and whether machines that demonstrate such independence and creativity pose a wider existential threat.

Generative adversarial network

The lifelike results achieved by Nvidia were down to the use of what’s called a generative adversarial network, or Gan, which pits two computer networks against each other in a series of rapid-fire true-or-false tests. One network generates random images. The other analyses those images, compares them to a huge database of real ones, and tells the first network how well it is doing. Over time, the former gets better at generating images and the latter gets better at classifying them. In Nvidia’s experiment, when the classifying network deemed a computer-generated face to be as lifelike as any in its database, the job was done.
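The tug-of-war described above can be sketched in miniature. In this illustrative toy (not Nvidia's actual system), the "generator" is a single shift parameter that turns random noise into samples, trying to mimic data drawn from a bell curve centred on 4; the "discriminator" is a simple logistic classifier on those numbers. All names and the one-dimensional setting are this sketch's own assumptions, chosen only to show the alternating update scheme.

```python
# Minimal adversarial training loop, assuming a 1-D toy problem:
# real data ~ N(4, 1); the generator produces theta + noise and must
# learn theta ~ 4 purely from the discriminator's feedback.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

theta = 0.0          # generator's only parameter, starts far from the truth
w, b = 0.0, 0.0      # discriminator's logistic-regression parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)          # genuine samples
    fake = theta + rng.normal(0.0, 1.0, batch)  # generator's attempts

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_b = (-(1 - d_real) + d_fake).mean()
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: nudge theta so the discriminator rates
    # fresh fakes as real (the "non-saturating" generator loss).
    fake = theta + rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * fake + b)
    grad_theta = (-(1 - d_fake) * w).mean()
    theta -= lr * grad_theta

print(round(theta, 2))  # drifts toward the real mean of 4 as the game plays out
```

The generator never sees the real data directly; it improves only because the discriminator's verdicts leak information about what "real" looks like — the same dynamic, scaled up to millions of parameters and photographic images, that produced Nvidia's faces.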

Created by machine, not man, ‘Edmond de Belamy’ sold for Dh1.6 million at auction. AFP

The apparent artistic flair of Gans has been developing fast of late. Four years ago, their best attempts consisted of blurry black-and-white images. Today, they are able to design dental crowns, construct elaborate 3D environments for use in computer games, refine telescopic images of outer space and transform pictures of horses into pictures of zebras. Back in October, an artwork titled Edmond de Belamy, created by a Gan, sold for US$432,500 (Dh1.6 million) at auction – more than 40 times the estimate. But Mike Tyka, an artist from Seattle who works with artificial intelligence, is sceptical of the work’s artistic value. “To me, training an algorithm on a set of existing artworks [to make] more artworks like them – that’s the most boring use of this technology,” he says. “But that’s the choice of the operator. You can set the objective to whatever you want and determine the training data, all of which determines the outcome.”

'That's not intelligent'

Much of this research – including the recent Nvidia experiment – seems to be directed towards producing images that fool the human brain, perhaps to underline the idea of powerful artificial intelligence. Gans have been built to process photographs and output a version that looks remarkably like a Van Gogh or a Monet, but Peter Hall, professor of computer science at the University of Bath, does not believe that this equates to the Gan having the skill of a painter. “This technique can produce startlingly impressive results,” he says. “But Gans are just very large, complex, look-up tables. If they see ‘This’, they replace it with ‘That’. In effect, all they’re doing is taking a photograph and tracing over the top – and that’s not intelligent.”

Even the least accomplished drawings made by humans demonstrate more visual understanding than a Gan, according to Hall. “I’m fascinated by the idea that a signal can go into our brain through our eyes, come out through our hands in a different form, and still be recognisable as the same thing,” he says. “Children might draw their parents, and we can recognise them as their parents even though it’s little more than a scribble. That’s visual understanding.” Tyka agrees: “A child’s drawing is still a much more abstractive transformation of experience than anything a modern machine learning network can do,” he says. “But equally, our own creativity is based on our own experiences – our ‘training data’ – and the vast majority, if not all, of human creative work is somewhat derivative.”

DeepMind: Google's artificial intelligence wing

While artists such as Tyka are using Gans to explore new forms of expression, the big tech firms are looking for practical, revenue-generating uses. In recent months, Google’s parent company, Alphabet, has also pursued the creation of authentic-looking images. In October, its artificial intelligence wing, DeepMind, revealed Gan-generated pictures of a dog, a landscape, a butterfly and a cheeseburger, all of which look remarkably close to the real thing. This ability to accurately approximate imagery has potential in the graphics industry, according to Hall. “Some companies will want machines to do routine design tasks such as cleaning up noise, adding a bit of hair, changing the reflection in the car wheel and so on,” he says. “But in the longer term, how this technology is deployed depends on what bosses want, what the economy’s doing and what politicians do.”


The lines between truth and fiction are blurred

The way such images blur the lines between truth and fiction is of great concern in an era characterised by misinformation and propaganda. The immediate worry when one looks at Nvidia’s gallery might be the possibility of the pictures being used as fake profile photos for bogus accounts on social media, but there is a broader issue of trust: whether we will start to wonder if any image we see on a screen has been generated by a computer. A report released earlier this month by AI Now, an institute studying the social implications of artificial intelligence, warned specifically of the unforeseen hazards that emerge while researchers work towards benign goals, and made a number of urgent recommendations for overseeing the burgeoning industry.

In the immediate future, we won’t see a mass of fake imagery created by computers. Enormous amounts of power are needed to create one small image; one estimate put it as equivalent to the electricity used by an average American household in six months. “These are very large and complex machines with millions of numbers involved,” Hall says. “And they are of much higher complexity than the phenomena they’re modelling.”

In many ways, it’s like using a sledgehammer to crack a nut – but, crucially, it can now crack that nut. “Gans that generate a realistic image could be said to have ‘won’ in a limited sense,” Hall says, “but they don’t understand the world in terms of its visual components.” We are cleverer than the computers we programme – for the time being, at least.

Updated: December 27, 2018 03:31 PM
