Google puts users at risk by downplaying health disclaimers under AI Overviews
Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.
When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summaries. "AI Overviews will inform people when it's important to seek out expert advice or to verify the information presented," Google has said.
However, the company does not include any such disclaimers when users are first presented with medical advice.
Google issues a warning only if users request additional health information by clicking a button labelled 'Show more'. Even then, the safety label appears below all of the extra medical advice assembled using generative AI, in a smaller, lighter font.
"This is for informational purposes only," the disclaimer tells users who click through for further details after seeing the initial summary and navigate to the very end of the AI Overview. "For medical advice or a diagnosis, consult a professional. AI responses may include mistakes."
Google did not deny that its disclaimers fail to appear when users are first served medical advice, or that they appear below AI Overviews in a smaller, lighter font.
AI experts and patient advocates said they were concerned. Disclaimers serve a vital purpose, they said, and should appear prominently when users are first provided with medical advice.
"The absence of disclaimers when users are initially served medical information creates several critical dangers," said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction.
"Second, the issue isn't just about AI limitations: it's about the human side of the equation. Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms.
"Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive."
Gina Neff, a professor of responsible AI at Queen Mary University of London, said the "problem with bad AI Overviews is by design" and Google was to blame. "AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous."
Neff said the investigation's findings showed why prominent disclaimers were essential. "Google makes people click through before they find any disclaimer," she said. "People reading quickly may think the information they get from AI Overviews is better than it is, but we know it can make serious mistakes."
Google has since removed AI Overviews for some but not all medical searches.
Sonali Sharma, a researcher at Stanford University's centre for AI in medicine and imaging, said: "The major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user's question at a time when they are trying to access information and get an answer as quickly as possible.
"For many people, because that single summary is there immediately, it basically creates a sense of reassurance that discourages further searching, or scrolling through the full summary and clicking âShow moreâ where a disclaimer might appear.
.
A Google spokesperson said: "It's inaccurate to suggest that AI Overviews don't encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate."
Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for urgent action. "We know misinformation is a real problem, but when it comes to health misinformation, it's potentially really dangerous," said Bishop.
"That disclaimer needs to be much more prominent, just to make people step back and think … 'Is this something I need to check with my medical team rather than acting upon it? Can I take this at face value, or do I really need to look into it in more detail and see how this information relates to my own specific medical situation?' Because that's the key here."
He added: "I'd like this disclaimer to be right at the top. I'd like it to be the first thing you see. And ideally it would be the same size font as everything else you're seeing there, not something that's small and easy to miss."