Vibe coding kills open source

What “vibe coding” is and what the paper actually claims

  • Commenters see “vibe coding” as agent/LLM-driven development that assembles OSS libraries with minimal human reading of docs or code.
  • Several people call the title “kills open source” clickbait; the paper’s own summary is more nuanced: productivity up, but user–maintainer engagement down.
  • One author clarifies “returns” means any reward to maintainers (money, reputation, jobs, community status), and argues those fall faster than productivity rises.

Disagreement over incentives and funding for OSS

  • Many push back on the paper’s premise that OSS is “monetized via engagement”: most projects make no money; serious funding comes from enterprises, consulting, grants, or “open core.”
  • Others note engagement still matters even there: stars, docs traffic, issue reports and conferences drive sponsorships, consulting, and enterprise adoption.
  • Some think vibe coding mainly harms “marketing funnel” OSS (frameworks, dev tools) whose docs and community are used to sell pro versions or services.

Maintainer experience: less signal, more slop

  • Several maintainers report two opposing effects:
    • PRs and issues drying up as users ask LLMs instead of searching for libraries or filing bugs.
    • The reverse: projects drowning in low‑quality, obviously AI‑generated PRs and issue comments, increasing review burden.
  • Concern that AI lowers the barrier to “I’ll just write my own” and to low‑effort drive‑by contributions, reducing collaboration and maintainability.

Developer anecdotes: strengths and limits of AI coding

  • Strong enthusiasm for:
    • Rapid prototyping, internal tools, and throwaway scripts.
    • Reviving abandoned projects, merging divergent forks, researching obscure build errors.
    • Overcoming decision paralysis and boilerplate; using agents as code reviewers or design critics.
  • Strong skepticism for:
    • Deep, domain‑specific design and semantics (e.g., geocoding ambiguity).
    • Large, complex systems (kernels, databases, browsers), where correctness, architecture, and long‑term maintenance dominate.
  • Many say effective use requires heavy scaffolding: design docs, tests, curated context, documentation, and treating LLMs more as reviewers and explainers than as primary authors.

Effects on the OSS ecosystem: fragmentation, revival, and norms

  • Optimists expect:
    • More small, purpose‑built OSS tools; easier resurrection of abandoned code; better ergonomics and UIs for niche projects.
    • New workflows: AI triage of PRs, AI‑enforced style/tests, richer contribution models, possibly new VCS/hosting primitives tied to chat transcripts, chain‑of‑thought traces, and CI results.
  • Pessimists fear:
    • Fragmentation into many overlapping, lightly‑maintained “vibe‑coded” projects.
    • Maintainers losing motivation as recognition signals (stars, issues, visible usage) shrink and as AI consumes their work without attribution.
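
The AI‑triage workflow the optimists imagine is unspecified in the discussion; as a rough illustration, even cheap heuristics could pre‑label incoming PRs before any model is consulted. A minimal sketch (all signal names, phrases, and thresholds are hypothetical, not from the paper or thread):

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    title: str
    body: str
    changed_test_files: int  # how many test files the PR touches
    linked_issue: bool       # whether the PR references an open issue

# Illustrative boilerplate phrases that often mark low-effort submissions.
SLOP_PHRASES = (
    "as an ai language model",
    "improved code quality",
    "this pr fixes the issue",
)

def triage(pr: PullRequest) -> str:
    """Return a coarse label: 'review', 'needs-detail', or 'likely-slop'."""
    body = pr.body.lower()
    score = 0
    if any(phrase in body for phrase in SLOP_PHRASES):
        score += 2  # vague boilerplate is the strongest signal here
    if not pr.linked_issue:
        score += 1
    if pr.changed_test_files == 0:
        score += 1
    if score >= 3:
        return "likely-slop"
    if score >= 1:
        return "needs-detail"
    return "review"
```

A real pipeline would presumably hand the "needs-detail" bucket to a model or a human; the point of the sketch is only that triage can be layered, with the cheapest filters first.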

Licensing, IP, and training‑data feedback loop

  • Several note a paradox: LLMs owe their capabilities to OSS, yet may undermine the incentives that created that corpus.
  • Concerns include:
    • LLMs as “clean‑room” IP laundering machines (e.g., re‑implementing GPL‑licensed designs under permissive licenses).
    • Weakening of copyleft and contributor‑license expectations when origin and license of generated code are opaque.
  • Suggestions include license‑aware coding tools and provenance/SBOM‑like tracking for generated snippets, but no clear solution emerges.
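
No standard exists for the provenance tracking suggested above, which is the commenters' point; as a sketch of what a per‑snippet record might carry (all field names hypothetical, loosely inspired by SBOM entries):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class SnippetProvenance:
    # Hypothetical provenance record for one AI-generated snippet.
    snippet_sha256: str    # hash of the generated code itself
    model: str             # which model produced it
    prompt_sha256: str     # hash (not text) of the prompt, for privacy
    declared_license: str  # license the generating tool claims for the output

def record_snippet(code: str, model: str, prompt: str,
                   declared_license: str = "UNKNOWN") -> SnippetProvenance:
    """Build a provenance record; hashes make records comparable without storing sources."""
    return SnippetProvenance(
        snippet_sha256=hashlib.sha256(code.encode()).hexdigest(),
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        declared_license=declared_license,
    )

rec = record_snippet("def add(a, b): return a + b", "some-model", "write add")
```

Such records could, in principle, ride alongside an SBOM and be audited against license scanners, though nothing in the thread suggests tooling like this exists yet.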

Future of software: bespoke apps vs shared infrastructure

  • One camp predicts a surge of bespoke, per‑user or per‑team apps (“3D printer for software”), with LLMs generating the 10% of features each user actually needs.
  • The opposing camp stresses enduring value in:
    • Standardized, battle‑tested infrastructure (kernels, DBs, Redis, ffmpeg, etc.).
    • Interoperability, long‑term maintenance, and real‑world hardening that one‑off vibe‑coded tools cannot match.
  • Several suggest the likely outcome is evolutionary: more AI‑assisted “prompt programming” for glue and niche tools atop a still‑shared OSS substrate, not a wholesale replacement of that substrate.