Google AI Overviews cite YouTube more than any medical site for health queries
Study, framing, and methodology
- Several commenters see the Guardian headline as misleading “clickbait.”
- YouTube is a hosting platform; grouping all “youtube.com” citations together ignores whether the actual publisher is a hospital, clinic, or individual influencer.
- The underlying study was conducted by an SEO company, focuses on domains rather than content quality, and uses German-language queries, which may skew which reputable English-language sources appear.
- Commenters suspect that if the many individual medical sites were aggregated into a single category, their combined citation share would likely exceed YouTube's.
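The aggregation argument can be made concrete with a toy sketch. The citation counts below are hypothetical stand-ins (the study's data isn't reproduced here): counted per domain, `youtube.com` tops the list, but pooled into categories, the medical sites together come out ahead.

```python
from collections import Counter

# Hypothetical (domain, category) citation pairs illustrating the commenters'
# point; these are not figures from the actual study.
citations = [
    ("youtube.com", "platform"),
    ("youtube.com", "platform"),
    ("youtube.com", "platform"),
    ("mayoclinic.org", "medical"),
    ("nih.gov", "medical"),
    ("clevelandclinic.org", "medical"),
    ("webmd.com", "medical"),
]

# Per-domain counting: no single medical site beats YouTube.
by_domain = Counter(domain for domain, _ in citations)

# Per-category counting: medical sites pooled together take the lead.
by_category = Counter(category for _, category in citations)

print(by_domain.most_common(1))    # [('youtube.com', 3)]
print(by_category.most_common(1))  # [('medical', 4)]
```

The same headline dataset supports opposite conclusions depending on the grouping key, which is the core of the "misleading clickbait" objection.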
Self‑preferencing and incentives
- Many say it is unsurprising that Google products (AI Overviews) amplify another Google product (YouTube); neutrality was never realistic.
- Some view this as a straightforward conflict of interest and an antitrust signal: citations and UI are being steered toward what makes Google more money (video, ads, engagement).
YouTube as a medical source
- Defenders note that many reputable institutions and physicians publish on YouTube; video can be an excellent teaching medium, especially for procedures.
- Critics counter that ordinary users cannot easily distinguish expert channels from quacks, conspiracists, and “miracle cure” peddlers, and that video is especially persuasive even when wrong.
- There’s concern about social-media-driven self‑diagnosis (e.g., ADHD/autism, alternative treatments) and medical influencers explicitly positioning themselves against mainstream doctors.
Quality of AI Overviews / Gemini
- Repeated reports of AI Overviews being confidently wrong, fabricating capabilities (“how to” answers for things you simply can’t do), and never saying “I don’t know.”
- Some say Gemini/Overviews use cheaper, weaker models to keep costs down at Google scale.
- A few users report good experiences (e.g., surprisingly accurate cancer‑progression expectations from labs), but this is framed as doctors being reluctant to give concrete timelines rather than proof of medical reliability.
AI‑generated content and feedback loops
- Strong worry about Gemini citing AI‑generated YouTube videos: an “ouroboros” of models training on and citing each other’s slop.
- Commenters mention propaganda, conspiracy content, and deliberate attempts to game rankings (e.g., genocide denial, far‑right narratives) and ask how hard it would be to steer LLM outputs by mass‑producing targeted content.
- The concept of “citogenesis” (false claims gaining legitimacy via repeated citation) is raised as a systemic risk.
Broader search and web concerns
- Many feel Google search quality is declining, with AI Overviews and YouTube pushed ahead of cleaner text pages despite a dedicated “video” tab.
- Some argue big tech is turning the public web into a privatized, engagement‑optimized layer where reliable knowledge, especially in medicine, is hard to distinguish from monetized noise.