Google removes some AI summaries after users’ health put at risk

Google has removed some of its artificial intelligence health summaries after an investigation found people were being put at risk of harm by false and misleading information.

The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.

However, some of the summaries, which appear at the top of search results, provided inaccurate health information, putting users at risk of harm.

In one case that experts described as “dangerous” and “alarming”, Google provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy.

Typing “what is the normal range for liver blood tests” served up masses of numbers with little context and no accounting for patients’ nationality, sex, ethnicity, or age, the Guardian found.

What Google’s AI Overviews said was normal could differ drastically from what is actually considered normal for a given patient, experts said.

The summaries could lead seriously ill patients to wrongly believe they had a normal test result and fail to attend follow-up healthcare appointments.

After the Guardian investigation, the company removed AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.

A Google spokesperson said: “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”

The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range”, prompted AI Overviews. 

Vanessa Hebditch, of the British Liver Trust, a liver health charity, said: “A liver function test or LFT is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers.

“But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.

“In addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. This false reassurance could be very harmful.”

Google, which has a 91% share of the global search engine market, said it was reviewing the new examples provided to it by the Guardian.

AI Overviews still pop up for other examples that the Guardian originally highlighted to Google. 

They include summaries of information about cancer and mental health that experts described as “completely wrong” and “really dangerous”.

Asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources, and informed people when it was important to seek out expert advice.

- The Guardian


© Examiner Echo Group Limited