Total monthly number of Stack Overflow questions over time

Overall shape of the decline

  • A query of the public data (questions only) shows (see the sketch after this list):
    • Peak around 2014 (~207k questions/month).
    • Plateau / gentle decline from ~2014; clearer downtrend from ~2017.
    • COVID spike around April 2020, then the decline resumed.
    • By 2025: the entire year had fewer questions than a single month in 2021.
    • Late 2025: ~3.7k questions/month; early Jan 2026 extrapolates to similar levels.
  • Some note the current rate is lower than in the site’s earliest days, though not literally zero.
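
For reference, a minimal sketch of how such monthly question counts can be reproduced, assuming the public Stack Exchange API and its built‑in “total” filter (the figures above may come from a different source, e.g. the Data Explorer); the endpoint and parameter names are the API’s, the sample months are purely illustrative:

    # Count Stack Overflow questions per month via the public Stack Exchange API.
    # Minimal sketch: no API key (the unauthenticated quota is small), no paging,
    # and no backoff handling; add those for real use.
    import datetime as dt
    import requests

    API = "https://api.stackexchange.com/2.3/questions"

    def questions_in_month(year: int, month: int) -> int:
        start = dt.datetime(year, month, 1, tzinfo=dt.timezone.utc)
        end = (dt.datetime(year + 1, 1, 1, tzinfo=dt.timezone.utc)
               if month == 12
               else dt.datetime(year, month + 1, 1, tzinfo=dt.timezone.utc))
        resp = requests.get(API, params={
            "site": "stackoverflow",
            "fromdate": int(start.timestamp()),        # creation-date window
            "todate": int(end.timestamp()) - 1,        # todate is inclusive, stop 1s early
            "filter": "total",                         # return only {"total": N}
        })
        resp.raise_for_status()
        return resp.json()["total"]

    if __name__ == "__main__":
        for year, month in [(2014, 3), (2020, 4), (2025, 10)]:
            print(f"{year}-{month:02d}: {questions_in_month(year, month)} questions")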

LLMs vs pre‑AI decline

  • Many see ChatGPT (late 2022) as the inflection point: a visible acceleration downward from an already-declining baseline.
  • Others stress the decline began years before usable coding LLMs existed:
    • Question growth stalled ~2014; sustained decline from ~2017.
    • GitHub Copilot, GitHub Discussions and issues, Discord, and Reddit had already been siphoning off questions.
  • Broad consensus: LLMs “body‑slammed” SO, but onto a slope created by earlier problems.

Culture, moderation, and user experience

  • Many anecdotes about:
    • Hostile tone, snark, “RTFM” attitudes, and pile‑on downvotes.
    • Legitimate questions closed as “duplicate,” “off‑topic” or “too broad,” with linked threads that didn’t actually solve the problem.
    • Difficulty asking as the corpus grew and standards tightened; some users were banned or rate‑limited for deleting comments or asking about policy.
  • Counter‑view from long‑time curators:
    • SO was never meant as a personal help forum but as a curated, duplicate‑reduced, matter‑of‑fact knowledge base.
    • Downvotes and closures are framed as quality control, not personal attacks.
    • Many complaints are seen as misunderstanding the site’s goals and mechanics (e.g., duplicates as “signposts” to canonical answers).
  • General agreement that incentives (reputation, review queues) and unpaid moderation produced brittle, bureaucratic behavior.

Question saturation and ecosystem shifts

  • One camp argues a natural saturation effect:
    • “All the basic questions” for mainstream languages were answered; new questions were either duplicates or very niche.
    • Google got better at surfacing existing SO threads (and later rich snippets), so fewer people needed to ask.
  • Others counter that new technologies and versions constantly create fresh question space; a healthy community should have maintained more growth.
  • Many now prefer:
    • GitHub issues / Discussions and project‑specific forums.
    • Discord/Slack for “help desk” style support (though this hides knowledge from search and archives).
    • Reddit for more conversational, opinion, or “soft” questions.

Impact on LLMs and the knowledge commons

  • Widespread worry that as public Q&A dries up:
    • Future models lose fresh, real‑world troubleshooting data.
    • The remaining public web is increasingly “AI slop” recycling old answers.
  • Some argue LLMs can fall back to:
    • Official docs, code repositories, and live web tools/RAG.
    • Telemetry and agentic coding logs in closed ecosystems.
  • Others note SO’s special value:
    • Undocumented workarounds, bug lore, and “ship‑relevant” edge cases that don’t appear in manuals or code.
    • Rich comment discussions and corrections that LLMs currently do not replicate.

What people valued – and what’s lost

  • Many recall:
    • Canonical, high‑quality answers with deep explanations and tradeoffs.
    • Occasional gems: novel algorithms, elegant tricks, and insights from core library/language authors.
    • The ability to “pay back” by answering hard questions and building public artifacts.
  • Others emphasize persistent flaws even for experts:
    • Advanced, niche questions often went unanswered or got low engagement.
    • Accepted answers frequently became stale, while better later answers stayed buried.
  • Several express grief that:
    • Real human‑to‑human problem solving is being replaced by private, ephemeral LLM chats.
    • New engineers will never experience SO’s “golden age.”

Proposed futures / alternatives

  • Ideas floated:
    • An AI‑first Q&A platform (see the data‑model sketch at the end of this section) where:
      • LLMs draft initial answers; humans verify, correct, and add production context.
      • Reputation accrues for validating or fixing answers, not just posting first.
    • A Q&A‑plus‑wiki model, with strong incentives to update answers as tech evolves.
    • New, open, CC‑licensed communities (or federated systems) combining human curation and LLM assistance.
  • Skeptics note:
    • Fixing AI‑generated content is tedious and unrewarding at scale.
    • Without strong product and moderation design, any replacement risks repeating SO’s trajectory.
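
To make the AI‑first idea above concrete, a minimal, hypothetical data‑model sketch: nothing here corresponds to an existing platform, and the class names, verdict types, and point values are all assumptions. The point it illustrates is reputation flowing to the humans who verify or fix an LLM draft rather than to whoever posts first:

    # Hypothetical data model for the "AI-first Q&A" proposal; all names illustrative.
    from dataclasses import dataclass, field
    from enum import Enum

    class ReviewVerdict(Enum):
        CONFIRMED = "confirmed"    # human verified the draft works as described
        CORRECTED = "corrected"    # human fixed errors or added production context
        REJECTED = "rejected"      # draft was wrong or misleading

    @dataclass
    class Review:
        reviewer: str
        verdict: ReviewVerdict
        notes: str = ""            # caveats, version constraints, edge cases

    @dataclass
    class Answer:
        question_id: int
        drafted_by_model: str      # which LLM produced the initial draft
        body: str
        reviews: list[Review] = field(default_factory=list)

        def reputation_awards(self) -> dict[str, int]:
            """Reputation accrues to reviewers who validate or fix, not for posting."""
            points = {ReviewVerdict.CONFIRMED: 5,    # assumed weighting:
                      ReviewVerdict.CORRECTED: 15,   # correcting earns more than confirming
                      ReviewVerdict.REJECTED: 2}
            awards: dict[str, int] = {}
            for r in self.reviews:
                awards[r.reviewer] = awards.get(r.reviewer, 0) + points[r.verdict]
            return awards

    if __name__ == "__main__":
        a = Answer(question_id=1, drafted_by_model="some-llm", body="draft text")
        a.reviews.append(Review(reviewer="alice", verdict=ReviewVerdict.CORRECTED,
                                notes="Fails on Windows paths; fixed the example."))
        print(a.reputation_awards())   # {'alice': 15}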