AI Overviews cause massive drop in search clicks
Google’s incentives and business model
- Many see AI Overviews and AI Mode as Google’s attempt to keep users on Google pages, even at the expense of outbound clicks and the broader web.
- Debate on monetization: some expect ads/product placement inside AI answers; others say that’s hard to do without destroying trust, which may explain why it’s not fully rolled out yet.
- Some argue this is defensive: Google is trying not to lose search share to standalone LLMs, even if it risks cannibalizing search ads.
- Others note “money queries” (commercial searches) remain mostly link‑based, so ad revenue hasn’t yet crashed, but informational queries are being absorbed by AI.
User experience: why people use or avoid AI Overviews
- A large group finds the overviews convenient for quick factual queries (“what temp is pork safe at”, flight numbers, definitions), avoiding SEO spam, cookie walls, and hostile UX.
- Another group disables or hides overviews (query hacks, `udm=14`, userscripts, adblock filters) because they prefer raw results or distrust the summaries.
- Many compare AI Overviews to reading Hacker News comments instead of the article: a faster but potentially distorted shortcut.
Accuracy, safety, and hallucinations
- Numerous reports of blatantly wrong answers: fabricated game hints, wrong population numbers, unsafe cooking temperatures, incorrect legal/medical/organizational info.
- Overviews sometimes contradict their own cited sources or misinterpret simple pages; some say they hallucinate more than high‑end LLMs.
- This creates real‑world harm: misdirected calls to businesses, users arguing about event fees or policies based on AI, and confusion in health and legal contexts.
- Some believe non‑technical users over‑trust AI outputs, while others say the general public is more wary than technologists assume.
Impact on publishers, SEO, and the web’s economics
- Many publishers and small sites report traffic drops of ~40–50%, threatening ad‑funded content and niche hobby/educational sites.
- Concerns that Google is “stealing” their work: ingesting pages into models, answering on‑page, and intercepting both traffic and revenue.
- Fear of a “dead internet” loop: less incentive to create quality content → less fresh training data → more AI‑generated slop → overall quality spiral.
- SEO is morphing into “GEO”/LLM‑SEO: optimizing content to be quoted by AI engines instead of merely ranked in blue links.
Workarounds and alternative tools
- Users share tactics to avoid AI: profanity triggers, `-ai` in queries, the `udm=14` parameter, custom Firefox search engines, CSS/JS to hide the overview.
- Increasing interest in alternatives like Kagi, Perplexity, Marginalia, library‑style or local LLMs; praise for Kagi’s paid, ad‑free model and per‑site up/down‑ranking.
- Cloudflare’s “pay per crawl” and robots controls are cited as early mechanisms to charge or block AI crawlers.
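As a concrete example of the `udm=14` tactic: Google accepts a URL parameter that returns the classic "Web" results view without the AI Overview. A custom search engine entry in Firefox or Chrome can point at it directly (`%s` is the browser's query placeholder):

```text
https://www.google.com/search?udm=14&q=%s
```

Setting this as the browser's default search routes all address‑bar queries through the web‑only view. It works at the time of writing, but it is an unadvertised parameter and Google could change or remove it.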
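The robots controls mentioned above usually take the form of `robots.txt` rules aimed at AI crawler user agents. A minimal sketch (these user‑agent tokens are real published crawler names, but the list is illustrative, not exhaustive):

```text
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Block Common Crawl (a major LLM training data source)
User-agent: CCBot
Disallow: /

# Opt out of Google's AI training uses
User-agent: Google-Extended
Disallow: /
```

One caveat relevant to this discussion: `Google-Extended` governs Gemini training data, while AI Overviews draw on the ordinary Search index crawled by Googlebot, so a site cannot block the overviews this way without also dropping out of Search.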
Long‑term concerns: data, law, and information ecosystem
- Open questions about how future models will get training data if web content becomes paywalled, blocked, or economically unsustainable.
- Worries that AI providers dodge liability by blaming “the algorithm”, even though they are now clearly publishing their own synthesized content.
- Legal and ethical debates around defamation, libel, business harm, and whether AI platforms should be treated as publishers rather than neutral intermediaries.
- Some hope this collapses ad‑driven SEO sludge and revives passion‑driven, non‑monetized “small web”; others fear only propaganda and commercial content will remain worth producing.