Driverless cars: have the wheels come off autonomous technology?

Experts agree high-tech sensors essential for safety can still be fooled

It’s taken some risible predictions, failed demos and even a few fatalities, but reality is finally catching up with the hype about self-driving cars.

Last week Jim Hackett, chief executive of the Ford Motor Company, admitted that his industry had underestimated the challenge of creating autonomous vehicles fit for the open road.

While Hackett insisted Ford will still launch its first driverless car in 2021, it will fall well short of the sci-fi dream. According to Bloomberg, the vehicle will be restricted to specific duties and blocked from venturing outside a limited area.

Hackett’s downbeat view is becoming increasingly common among once gung-ho cheerleaders of the technology.

Back in 2012, Google founder Sergey Brin predicted the roll-out of self-driving cars for the public within five years. That deadline came and went.

When Google’s vehicle spin-out Waymo finally launched a “driverless” taxi service late last year, it was restricted to certain roads in Phoenix, Arizona – and had a human safety-driver.

Waymo’s chief executive John Krafcik is on record admitting that the go-anywhere-anytime driverless car may never exist: “Autonomy will always have some constraints”, he says.

The standard excuse for the less than light-speed progress towards autonomy is that sensor technology isn’t good enough – yet.

To cope with the challenges every human driver takes for granted, driverless cars have to be fitted with an array of cameras, radar and lidar – a kind of light-based radar.

But no matter how effective these are, they’re useless without computer algorithms that control the vehicle’s behaviour.

Many of these exploit artificial intelligence (AI) research which allows computers to learn how to react to sensor data based on countless hours of supervised driving.
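
To see the idea in miniature, consider the sketch below in Python, using the PyTorch library. Everything in it – the stand-in camera frames, the tiny network, the training loop – is purely illustrative and bears no relation to any carmaker's real system; it simply shows the principle of a machine learning to imitate recorded human steering.

    import torch
    import torch.nn as nn

    # Stand-in "recorded driving" data: camera frames paired with the
    # steering angle a human driver applied at the same moment.
    frames = torch.randn(64, 3, 66, 200)   # 64 dummy RGB camera frames
    angles = torch.randn(64, 1)            # matching human steering angles

    # A deliberately tiny network mapping each frame to one steering angle.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),           # squash each frame to 16 numbers
        nn.Flatten(),
        nn.Linear(16, 1),
    )
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Training: repeatedly shrink the gap between the network's guesses
    # and what the human driver actually did.
    for step in range(200):
        loss = nn.functional.mse_loss(model(frames), angles)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

The crucial point is that the machine is never given rules about driving; it merely adjusts itself until its outputs match what human drivers did.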

The techniques have been around for decades and have proved effective in a host of applications from face recognition to diagnosing disease.

But they’ve also been responsible for some notorious blunders that seem hilarious until one ponders the implications.

A famous Chinese businesswoman recently found herself accused of illegal jay-walking in the eastern Chinese city of Ningbo by a facial recognition system.

A driverless car seen during testing in Singapore. AFP

It later emerged she was nowhere near the scene – and that the AI had been fooled by an advertising poster featuring her face on the side of a bus.

The police apologised and said the AI would be “upgraded” to eliminate such errors.

Yet the idea that practice makes perfect with AI is being undermined by ongoing experiments by computer scientists.

In one such study by researchers at Google, a state-of-the-art AI was trained to recognise bananas, snails, slugs and other similar-looking objects by studying thousands of images in all kinds of lighting and environments.

In tests, the AI performed perfectly well – until confronted with an image of a banana lying next to an odd-looking sticker. Suddenly the AI decided it was looking at a toaster.

That was no accident, however. The Google researchers had created the sticker specifically to fool the AI.

To do it, they exploited the fact that AIs are trained on scenes containing just one dominant object, which the computer must then classify.

The researchers found a way to create weird-looking stickers that distract the AI and lead to wildly wrong identifications – like believing a banana is a toaster.

Known as an adversarial attack, the sticker trick has the power to fool any current AI – even the most sophisticated “deep learning” systems, which pass data through many layers of artificial neural networks to boost their powers.
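
For readers curious about the mechanics, the sketch below shows a simpler cousin of the sticker trick: the well-known “fast gradient sign method”, again in Python with PyTorch. It is illustrative only – the Google team’s “adversarial patch” is more elaborate – but it captures the core idea: ask the network which tiny pixel changes most increase its error, then apply them.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Load a standard pretrained image classifier as the "victim".
    model = models.resnet18(weights="DEFAULT").eval()

    def fgsm_attack(image, true_label, epsilon=0.01):
        # Ask the network which direction of pixel change most increases
        # its error, then nudge every pixel a tiny step that way.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        doctored = image + epsilon * image.grad.sign()
        return doctored.clamp(0, 1).detach()

    # Illustrative use: a random stand-in "photo", labelled as ImageNet
    # class 954 ("banana"). A real attack would start from a real image.
    photo = torch.rand(1, 3, 224, 224)
    banana = torch.tensor([954])
    adversarial_photo = fgsm_attack(photo, banana)

A nudge of one per cent per pixel is invisible to the human eye, yet it is routinely enough to make a classifier announce, with high confidence, that it is looking at something else entirely.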

The implications for driverless vehicles are scary. What’s to stop hackers armed with a printer from creating stickers that fool the AIs at the wheel?

Nothing. In fact, it’s already been done.

Computer security expert Professor Dawn Song of the University of California, Berkeley, has shown that a few small stickers on a “STOP” sign can trick an AI system into reading it as a speed limit sign.

Last month researchers at Tencent’s Keen Security Lab in China showed that a few small stickers placed on the road can fool the Autopilot of a Tesla Model S into steering into oncoming traffic.

The company responded by telling Forbes that the demonstration was “not a realistic concern” because a human driver could override the autopilot at any time “and should always be prepared to do so”.

That’s hard to square with Tesla founder Elon Musk’s statement in February that the company is on the brink of unveiling a truly hands-off self-driving car.

In a podcast interview with Tesla backer ARK Invest, Musk declared the car “will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention”.

Musk added that, at a guess, the technology would be good enough by sometime next year for drivers to fall asleep and wake up at their destination.

He believes regulators, rather than the technology itself, are the biggest barrier to his vision becoming a reality.

But the real barrier is likely to be a lack of customers.

Despite the grand pronouncements and billions of dollars of investment by leading car-makers, public mistrust of self-driving technology is actually increasing.

A recent survey by the American Automobile Association found that 71 per cent of the public would not take a ride in such a vehicle – up from 63 per cent in 2017.

Advocates of the technology have been keen to dismiss such fears by pointing out that more than 90 per cent of road accidents are due to human error. The implication is that vehicles without humans at the wheel will be far safer.

But as evidence mounts of the vulnerability of AI to even minor distractions – both innocent and malicious – the scepticism of the public seems more justified than the visions of tech gurus.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK