Beyond cyberchondria: Why we should probably stop using Siri and Alexa for health advice

We often turn to our virtual assistants when we have a medical concern, but a new study suggests we probably shouldn’t

Siri, Apple's voice assistant, pictured on an iPhone 4S.

Unusual health symptoms make us anxious. Ideally, our fears would be addressed and we'd be given reassurances or remedies as quickly as possible. Given the wealth of information the internet provides, it's hardly surprising that we instinctively turn to it in search of answers.

Even if we know, deep down, that a Google search doesn't replace a professional diagnosis, we do it anyway – 80 per cent of us, in fact, according to a study by the Pew Research Center. Google's data reveals 7 per cent of all searches are related to health. That works out at more than 1,000 anxious medical queries every second. There's even a name for it: cyberchondria. With dozens of worries playing on our minds and no immediate answers, we lean on technology to fill the information vacuum.
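That per-second figure is easy to sanity-check. The calculation below is a rough sketch: the ~3.5 billion searches a day is an assumption based on commonly cited public estimates, not a number from the article itself.

```python
# Rough sanity check of the "more than 1,000 health queries every second" figure.
searches_per_day = 3.5e9          # assumed global Google searches per day (public estimate)
health_share = 0.07               # 7 per cent of searches are health-related, per Google's data
seconds_per_day = 24 * 60 * 60    # 86,400 seconds in a day

health_queries_per_second = searches_per_day * health_share / seconds_per_day
print(round(health_queries_per_second))  # roughly 2,800 - comfortably over 1,000
```

Even with a conservative estimate of total search volume, the rate clears the 1,000-a-second mark several times over.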

We've even started to turn to smart speakers and digital assistants. Forums on Reddit show the rather inconsistent experiences of people who have used their devices for, say, reassuring ageing parents.


So a team of researchers at the University of Alberta tested the four biggest virtual assistants – Google Assistant, Amazon's Alexa, Apple's Siri and Microsoft's Cortana – to establish whether they were up to the job.

They asked 123 questions across 39 first aid topics and found the results to be "highly variable". Alexa and Google Assistant had "low to moderate congruence with guidelines". Cortana's and Siri's responses were so poor they "prohibited their analysis". The subtext of the study? Treat advice obtained via technological means with caution.

It's a worry that echoes throughout the medical profession. "While Google certainly has a vast quantity of information, it lacks discernment," a nurse wrote on the American website Healthline. "While it's pretty easy to find lists that sound like our symptoms, we don't have the medical training to understand the other factors that go into making a medical diagnosis, like personal and family history. And neither does Dr Google."

Essentially, we plough through information and guess what might be applicable. Then, depending on our personalities, we either focus on the ominous stuff or reassure ourselves with the benign stuff. Smart speakers and virtual assistants trawl through exactly the same information, but algorithms tell them what's likely to be reliable. "By and large, their sources of information were credible," says Christopher Picard, a co-author of the University of Alberta study. "Mayo Clinic came up a few times, as did WebMD, which are better than most."

Amazon's Alexa can't necessarily be trusted for health advice. AP Photo.

But can we trust the decisions our digital assistants make on our behalf? Google recognised years ago that the sheer quantity of health searches gave it a moral and social responsibility to improve the results. In 2015, it began using its Knowledge Graph to pull out authoritative information for about 900 complaints and present it in a box at the top of the results page.

In 2018, it expanded this feature to let users search for particular symptoms – for example, "my stomach hurts" – and directed them to information approved by doctors. Last year, it appointed a head of Google Health, David Feinberg, who was partly tasked with improving the company's diagnostic skills.


But eyeballs also equal revenue and the public's huge curiosity about their own health is prompting many platforms and services to up their game and possibly become the go-to health consultant. Amazon began adding health skills to Alexa in 2016, offering advice about children's symptoms, such as fevers, coughs and headaches. While many of us would baulk at seeking advice from a talking box, Picard says these devices will play an ever-greater role in our lives.

"Smart speakers are changing the way people consume their information and it's interesting to unpack the idea that they're trusted to a higher degree than a computer screen," he says. "The data is no different to an online search – but do we inherently trust it more because the human voice has more credibility?"

Yet our eagerness to place our trust in unreliable information lies at the source of the problem. We're happy to believe all kinds of things in the hope they'll make us better, and in some cases the outcomes have been tragic. In 2016, Chinese search engine Baidu came under fire for providing links to esoteric treatments for cancer, which were later blamed for contributing to the death of a student.

Social media, whose algorithms lean more towards engaging us than informing us, can also dangerously skew our understanding of what constitutes good medical treatment. At the end of last year, Facebook was forced to remove advertisements propagating the idea that an effective HIV drug, Truvada, had harmful side effects, after 50 major health organisations signed a letter complaining the advertisements were "factually inaccurate". This month, the Huffington Post revealed vaccine-related misinformation was still prevalent on Instagram, despite a year-old pledge from the Facebook-owned service that it would take measures to suppress it.

Concern is so great because this material is like catnip to anyone worrying about their health. A lack of discernment from the public means all health information becomes equal. That’s a problem.

If we were able to educate ourselves, health searches would stop being dangerous and start to become beneficial. The Pew study found that people who take steps to properly learn about their health online are more likely to obtain better treatment. A 2018 study in The Medical Journal of Australia found that online searches could have a "positive impact on the doctor-patient interaction".

Meanwhile, the technology surrounding diagnosis is improving. Feinberg has promised to work more closely with Google Search and YouTube to downrank manifestly false information. AI systems are forever learning about combinations of symptoms and what they might mean. And as we use smart speakers and chatbots more often, we will become happier to delegate choices to them – even about our health care.

"It's obviously not there yet," Picard says. "But it won't take a lot to get smart speakers giving meaningful advice. It may not be able to do the big things, but meaningful hands-free information on the little things? That would be a really, really useful tool."