An examination of Google’s “What People Suggest” feature — from launch to removal — reveals a pattern of enthusiasm followed by evasion that has come to define the company’s handling of health AI products. The feature used AI to organize community health content from internet discussions and display it in search results. Three insiders confirmed its removal, and Google acknowledged it after media inquiries drew attention to the change.
The feature was introduced at “The Check Up,” Google’s health event in New York, by then-chief health officer Karen DeSalvo, who wrote that the tool reflected users’ genuine desire for peer health insights. The AI curated health forum discussions into thematic summaries and linked users back to the original content. The feature initially rolled out to mobile users in the United States.
Google denied that safety was a factor in the removal, citing search page simplification instead. When asked for evidence of a public announcement, the company pointed to a blog post that contained no mention of the discontinued feature. The inconsistency has drawn sharp criticism from health AI observers.
The wider context includes an investigation that found AI Overviews in Google Search serving false medical information to a user base of approximately two billion monthly users. Google made limited adjustments in response, including removing some medical AI Overviews, but the broader question of AI health accuracy on the platform remains unresolved.
At its upcoming health event, Google will attempt to make a positive case for its role in AI-driven healthcare. Whether the audience, including health professionals, regulators, and the public, finds that case convincing will depend heavily on whether the company has genuinely internalized the lessons of “What People Suggest.” So far, the evidence suggests more work remains to be done.
Photo credit: www.freepik.com

