The story of Google’s “What People Suggest” feature is a case study in the risks and responsibilities of deploying AI in health search. The tool, which used AI to organize amateur health advice from internet communities, has been discontinued. Three insiders confirmed the removal, and Google’s acknowledgment of it came with an explanation that failed to withstand scrutiny.
Introduced at Google’s “The Check Up” health event in New York by then-chief health officer Karen DeSalvo, the feature was positioned as a meaningful innovation in health search. DeSalvo wrote that users genuinely want to hear from people in similar situations, not just from medical professionals, and that the feature was built to address that. The AI curated health discussions from online communities into organized themes that users could browse.
Google stated that the removal was part of simplifying the search interface and denied that safety concerns played any role. But when asked to show where the change had been publicly communicated, the company pointed to a blog post that made no mention of the discontinued feature. One insider put it plainly: “It’s dead.”
The lessons of this episode are reinforced by an investigation that found Google’s AI Overviews spreading false health information to two billion users a month. Google removed some medical AI Overviews in response but stopped short of the systemic reforms that health professionals and patient advocates have called for.
As Google’s next health event draws near, the company has an opportunity to turn a page on a difficult period for its health AI products. Doing so will require genuine accountability — the willingness to publicly acknowledge what has not worked and to commit to standards of safety and transparency that match the significance of the domain. The end of “What People Suggest” is both a warning and an invitation. The question is whether Google will answer it.