AI is destroying open source, and it's not even good yet

AI vs Crypto / NFT Comparisons

  • Several comments compare the AI boom to the crypto/NFT bubble: the same hype and spam, but AI has more obvious practical utility.
  • Others stress that underlying crypto tech (ledgers, zero-knowledge proofs) and NFTs have narrow but real uses, just as LLMs do, while the investment mania is disconnected from actual value.

Impact on the Internet and Content Quality

  • Many say the web was already being degraded by ad-driven platforms and SEO spam; AI simply accelerates the trend.
  • LLM-generated “slop” sites make search results worse and harder to filter than pre-LLM SEO farms.
  • Some argue LLMs aren’t “destroying the internet” so much as exposing pre‑existing structural problems in content economics.

Maintainer Experience and “AI Slop”

  • Maintainers report a surge in large, untested, AI-generated PRs and bogus bug/vuln reports, often submitted for bounties, résumé fodder, or “I contributed to OSS” clout.
  • Reviewing becomes more expensive: plausible-looking changes, weak understanding, no tests, and LLM-written replies to review comments.
  • Some projects are disabling PRs, closing bug bounties, or moving toward “open source, not open contribution” models.
  • Crawling by AI scrapers (commit-by-commit, not just clones) is described as a constant resource drain.

Defenses, Gating, and Reputation

  • Suggested mitigations: disable PRs, limit contributions to known contributors, require discussion in an issue before a PR, quizzes or CONTRIBUTING-file gates, reputation/karma systems, or even email-based PRs.
  • Others warn these measures erode the “anyone can contribute” ethos and may push OSS toward walled gardens and cathedral-style development.
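The reputation-gating idea above could be sketched roughly as follows. This is a minimal illustration, not anything proposed verbatim in the thread; the function name, signals, and thresholds are all hypothetical.

```python
def triage_lane(prior_merged_prs: int, account_age_days: int,
                min_merged: int = 2, min_age_days: int = 30) -> str:
    """Route an incoming pull request into a review lane using crude
    reputation signals. Thresholds here are illustrative only."""
    if prior_merged_prs >= min_merged and account_age_days >= min_age_days:
        return "review"   # known contributor: normal review queue
    return "triage"       # unknown contributor: slower manual triage

# A brand-new account with no merged PRs lands in the triage lane:
print(triage_lane(prior_merged_prs=0, account_age_days=3))    # triage
print(triage_lane(prior_merged_prs=5, account_age_days=400))  # review
```

The trade-off critics raise applies directly here: any such gate, however simple, means a first-time contributor is no longer treated the same as a maintainer, which is exactly the erosion of "anyone can contribute" the thread warns about.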

Optimistic Uses of AI in OSS

  • Individual devs report 5× productivity on personal projects, easier experiments, and better test suites with AI assistance.
  • Some maintainers say agents helped revive stagnant projects or handle tedious testing.
  • Proposals: donors fund token usage so maintainers can turn money directly into features via agents; agents triage PRs and bug reports. Skeptics doubt the economics and current code quality.

Licensing, “Information Theft,” and Compensation

  • Strong sentiment that mass training on OSS without consent is “information theft” and that AI firms should be taxed/forced to compensate maintainers.
  • Debate over whether LLM output is copyrightable, GPL‑compatible, or effectively public domain; the consensus in the thread is that its legal status is unclear.
  • Several broaden the critique: AI is “data fracking” harming many commons—OSS, StackOverflow, Internet Archive, OpenStreetMap, journals—via scraping and fake submissions.

Skills, Learning, and Legibility of Merit

  • Frequent complaint: low‑skill users wield LLMs without understanding, becoming Dunning–Kruger exemplars who trust slop and flood others with it.
  • Some use AI as a tutor and helper, insisting on self‑review and tests; they see AI as a powerful accelerator of learning, not a replacement for it.
  • Because so much code is now AI‑assisted, open‑source activity is viewed as a weaker proxy for actual engineering skill, and some prefer in‑house rewrites over trusting small third‑party projects.