We might have been slower to abandon StackOverflow if it weren't a toxic hellhole
Traffic decline and role of LLMs
- Commenters note that question volume peaked around 2014 and declined long before modern LLMs; LLMs are seen as a “final nail” rather than the sole cause.
- Explanations for pre‑LLM decline include: saturation (most common questions already answered), stagnation and competition, worsening Google search, and decreasing fun/usefulness of participation.
- Others reject the “saturation” story, pointing to fast‑changing tech and more programmers; they argue decline reflects community problems, not that all questions were answered.
- A separate metric is cited: the percentage of questions receiving an answer has fallen from over 90% in the site’s early years to roughly 35% in recent years.
Toxicity, strictness, and user experience
- Many describe fear of being chastised, aggressive duplicate closures, and rapid downvoting of new users (“be nice” banner alongside -5 scores) as driving them away.
- Some recount questions closed as duplicates of subtly different or obsolete posts (e.g., jQuery, old .NET/Android APIs), or genuinely useful operational/devops questions marked off‑topic.
- Others defend the ethos: SO was explicitly designed as a curated knowledge base for future searchers, not a help desk; excluding low‑quality and duplicate questions was considered essential.
- Several say terse, critical feedback improved their question‑asking skills and that strict curation prevented Reddit‑style endless repeats; they see later “toxicity” complaints as coming from users unwilling to do minimal research.
Moderation, governance, and gamification
- There’s sharp disagreement over moderators and high‑rep curators:
  - Critics describe power‑tripping, status‑seeking “roomdwellers” and heavy‑handed company staff; they say closing/flagging became an end in itself and blocked correction of wrong or outdated answers.
  - Defenders stress that most closures are made by subject‑matter experts, not staff; duplicate closure is framed as necessary curation, not meanness, and “toxic” often just means users’ expectations didn’t match the site’s model.
- Gamification (reputation, badges) is widely seen as misaligned with the site’s goals, encouraging quantity over long‑term maintenance and making it hard for new, better answers to displace old, highly‑voted ones.
Outdated answers and duplicate policy
- A recurring complaint is that strict duplicate policy routed users to very old answers that were technically correct once but now harmful or irrelevant; updating them is socially and mechanically hard, so practical accuracy degraded over time.
- Defenders respond that new answers should be added to old questions rather than opening new ones, but acknowledge that the tooling and voting dynamics made this work poorly in practice.
LLMs, data, and the future
- Some worry that as SO (and similar sites) fade and more content is AI‑generated, LLMs will increasingly train on AI slop, creating a feedback loop and weakening future models.
- Others argue models can rely on source code, documentation, and logs from developer‑AI interactions, and that modern LLMs can already answer from fresh context without needing SO.
- There is disagreement about how strongly LLM capability is tied to curated Q&A vs general corpora, and about current hallucination/error rates.
Broader lessons about communities
- Several commenters see SO as an example of how large communities drift toward toxicity without constant, expensive, transparent moderation; others point to Wikipedia/HN/Lobsters as counterexamples that show it is possible but growth‑limiting.
- Some insist SO “wasn’t a toxic hellhole” for them and that many critics were simply blocked from being lazy; others recall sarcasm and rudeness to beginners as pervasive and say those users left and didn’t come back.