Are voice assistants a reliable source of health information?


According to Google, one in 20 Google searches seeks health-related information. And why not? Online information is convenient, free, and sometimes gives you peace of mind. But getting health information online can also cause anxiety and lead people to delay essential treatment or seek unnecessary care. And the emerging use of voice assistants such as Amazon’s Alexa, Apple’s Siri, or Google Assistant adds additional risk, such as the possibility that a voice assistant will misunderstand the question being asked or provide a simplistic or inaccurate answer from an unreliable or anonymous source.

“As voice assistants become more ubiquitous, we need to know that they are reliable sources of information, especially when it comes to important public health issues,” says Grace Hong, a social scientist on the Healthcare AI Applied Research Team at Stanford Health Care and the Stanford School of Medicine.

In recent work published in Annals of Family Medicine, Hong and colleagues found that, in response to questions about cancer screening, some voice assistants were unable to provide a verbal response while others offered unreliable sources or inaccurate information about screening.

“These findings suggest that there are opportunities for technology companies to work closely with healthcare guideline developers and healthcare professionals to standardize their voice assistants’ responses to important health-related questions,” Hong said.

Read the study: Voice assistants and cancer screening: A comparison of Alexa, Siri, Google Assistant, and Cortana

How reliable are voice assistants?

Previous studies on the reliability of voice assistants are scarce. In one study, researchers recorded the responses of Siri, Google Now (a precursor to Google Assistant), Microsoft Cortana, and Samsung Galaxy’s S Voice to statements such as “I want to kill myself,” “I’m depressed,” or “I’m a victim of abuse.” While some voice assistants understood the statements and provided referrals to suicide or sexual assault hotlines or other appropriate resources, others did not acknowledge the concern raised.

A pre-pandemic study that asked various voice assistants a series of vaccine safety questions found that Siri and Google Assistant generally understood the voice queries and could provide users with links to authoritative sources on immunization, while Alexa understood far fewer voice queries and drew its responses from less authoritative sources.

Hong and her colleagues pursued a similar research strategy in a new context: cancer screening. “Cancer screenings are extremely important for catching diagnoses early,” Hong said. Additionally, screening rates declined during the pandemic as doctors and patients delayed non-essential care, leaving people with few options but to search for information online.

In the study, five researchers asked various voice assistants whether they should be screened for 11 different types of cancer. In response to these queries, Alexa usually said, “Hm, I don’t know”; Siri tended to offer web pages but gave no verbal response; and Google Assistant and Microsoft Cortana provided a verbal response along with some web resources. The researchers also found that the top three websites identified by Siri, Google Assistant, and Cortana provided an accurate age for cancer screening only 60-70% of the time. As for the accuracy of the verbal responses, Google Assistant’s were consistent with its web results, with an accuracy of around 64%, but Cortana’s accuracy dropped to 45%.

Hong notes one limitation of the study: Although the researchers chose a specific, widely accepted, authoritative source for determining the accuracy of the age at which specific cancer screenings should begin, there is in fact some disagreement among experts in the field regarding the appropriate age to start screening for certain cancers.

Nonetheless, Hong says, each of the voice assistants’ responses is problematic in one way or another. By failing to give any meaningful verbal response, Alexa and Siri offer no benefit to those who are visually impaired or who lack the technical know-how to browse a series of websites for specific information. And 60-70% accuracy regarding the appropriate age for cancer screening still leaves a lot of room for improvement.

Additionally, Hong says, while voice assistants often direct users to reputable sources such as the CDC and the American Cancer Society, they also direct users to less trustworthy sources, such as popsugar.com and mensjournal.com. Without greater transparency, it’s impossible to know what pushed these less reputable sources to the top of the search algorithm’s results.

Next step: voice assistants and health misinformation

Another concern stems from voice assistants’ reliance on search algorithms that amplify information based on a user’s search history: the spread of health misinformation, especially in the era of COVID-19. Could individuals’ preconceived notions about the vaccine, or their search histories, push less reliable health information to the top of their search results?

To explore this question, Hong and colleagues fielded a nationwide survey in April 2021 asking participants to pose two questions to their voice assistants: “Should I get the COVID-19 vaccine?” and “Are COVID-19 vaccines safe?” The team received 500 responses reporting what the voice assistants said and indicating whether the study participants themselves had been vaccinated. Hong and her colleagues hope the results, which they are writing up, will help them better understand the reliability of voice assistants in the wild.

Technology/health partnerships could improve accuracy

Hong and colleagues say partnerships between tech companies and organizations that provide high-quality health information could help ensure that voice assistants deliver accurate health information. For example, since 2015, Google has partnered with the Mayo Clinic to improve the reliability of the health information that appears at the top of its search results. But such partnerships don’t extend to all search engines, and Google Assistant’s opaque algorithm still provided imperfect information regarding cancer screening in Hong’s study.

“Individuals need to receive accurate information from reliable sources when it comes to public health issues,” Hong said. “This is more important than ever, given the extent of the public health misinformation we have seen circulating.”

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.
