The fate of “small” open source

AI slop, spam, and the degraded web

  • Many see AI as “industrializing” existing bad behaviors: spammy tutorials, phishing, scraped blogs, low‑effort PRs, SEO garbage.
  • Others argue the degraded web predates LLMs; AI changes the flavor of the spam, not the underlying problem.
  • YouTube and web search are described as increasingly overrun by AI‑generated content; some imagine “human‑only” or paid, curated services as an escape, but doubt this can scale or be reliably enforced.

Search engines, SEO, and AI summaries

  • Some praise AI search summaries for bypassing clickbait and SEO slop for simple queries.
  • Others say summaries are often subtly wrong, strip context, and make it harder to judge source quality; “slop vs condensed slop.”
  • Strong disagreement over whether Google “nerfed” search deliberately to boost ad revenue or simply lost the arms race against SEO. Stories about internal ad-economics decisions are cited as evidence of deliberate degradation.

Fate of small / micro open source libraries

  • Many distinguish between genuinely useful small tools and trivial micro‑dependencies (e.g., “left-pad”–style utilities) that arguably never made sense.
  • Several see LLMs as the final nail in the coffin for these micro-libs: developers can just generate a 10‑line helper instead of adding a dependency (see the sketch after this list).
  • Others counter that mature utilities (e.g., Apache‑style commons) encode years of bugfixes and edge cases; LLM‑generated code is “instant legacy” with unknown behavior.
  • Vendoring tiny snippets or header‑only style libs is praised as a middle ground, though critics worry about updates, security, and licensing.
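
To make the trade-off concrete, here is a minimal sketch of the kind of ten-line helper under discussion. It is purely illustrative: the name `leftPad` echoes the famous npm incident, but the code and signature here are hypothetical, not the original package.

```typescript
// Hypothetical vendored helper — the sort of micro-dependency under discussion.
// A provenance comment like the one below is one answer to the update/licensing
// worry about vendoring:
// VENDORED FROM: (record source URL, version, and license here before committing)

/** Pad `input` on the left with `fill` until it is at least `width` chars long. */
export function leftPad(input: string, width: number, fill: string = " "): string {
  // Edge cases: empty fill string, or input already wide enough.
  if (fill.length === 0 || input.length >= width) return input;
  const deficit = width - input.length;
  // Repeat the fill enough times, then trim to the exact deficit.
  return fill.repeat(Math.ceil(deficit / fill.length)).slice(0, deficit) + input;
}

// Example: leftPad("7", 3, "0") === "007"
```

The counterargument is visible in the same sketch: modern JavaScript already ships `String.prototype.padStart`, and the edge cases guarded above (empty fill, already-wide input) are exactly the kind of accumulated fixes a fresh LLM generation may miss.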

Open source maintenance, spam, and gatekeeping

  • Maintainers report a surge of AI‑generated PRs/issues from contributors who don’t understand the project, treating maintainers as free QA.
  • This is framed as a “care” problem amplified by AI: low‑effort code at much higher volume.
  • Proposed responses: stricter vetting, filters (possibly AI‑based), and closed or “cathedral” contribution models, at the cost of making FOSS more gatekept and less welcoming; one vetting sketch follows below.
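
As one illustration of what “stricter vetting” can look like in practice, a maintainer might require a pull-request template with an explicit AI-disclosure checklist. The file path is GitHub’s standard template location; the checklist wording is a hypothetical sketch, not any project’s actual policy.

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md — hypothetical vetting checklist -->
## Required checklist

- [ ] I have read CONTRIBUTING.md and an open issue exists for this change
- [ ] I ran the test suite locally and all tests pass
- [ ] I can explain every line of this diff myself
- [ ] If AI tools helped produce this change, I have disclosed that below

## What does this change do, and why is it needed?
```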

Motivations, licensing, and training data backlash

  • Some creators now refuse to open source new work (or release binaries only) to avoid it being used as free AI training data, seeing current AI as one‑way extraction with privatized gains.
  • Suggestions include copyleft/AGPL to deter corporate use, or “source‑available” licenses, though many doubt this will meaningfully stop scraping.
  • Others argue that broad reuse—including via models—is aligned with the original spirit of free software and that obsession over attribution misses the larger benefits.

Education, documentation, and learning

  • Concern: LLMs shift culture toward “instant answers” over deep understanding; small OSS and blog posts once served as educational material.
  • Counterpoint: LLMs can be superb tutors, patient and interactive, able to explain code or docs at arbitrary depth. Some projects now ship LLM‑friendly documentation (e.g., llms.txt‑style outputs; a minimal example follows this list).
  • There’s skepticism about whether people will study LLM‑generated inlined code more than they ever read code buried in dependencies; careless developers may read neither.
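
For reference, an llms.txt file is just a Markdown index served at a site’s root, following the format proposed at llmstxt.org: an H1 title, a blockquote summary, and sections of annotated links. The project name and URLs below are placeholders.

```markdown
# ExampleLib

> ExampleLib is a small parsing library; this file indexes its LLM-friendly docs.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first use
- [API reference](https://example.com/docs/api.md): every public function

## Optional

- [Changelog](https://example.com/changelog.md): release history
```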

Broader outlook

  • Some think open source will remain central and even get stronger as motivated developers use AI to tackle more ambitious projects.
  • Others foresee burnout: rising spam, corporate control of ecosystems (package hosts, search), and a sense that anything open will just be harvested into proprietary models.
  • A shared theme: the real scarcity is care and high‑quality human attention; AI can either free that up for harder problems—or flood it with even more low‑value noise.