Don't fall into the anti-AI hype

Open Source, Licensing, and “Stolen” Code

  • Many commenters feel that LLM training on OSS is a de facto license violation: the intent of GPL/AGPL is that derivatives remain copyleft and credit their authors, while AI outputs let companies “launder” that work into closed, unattributed code.
  • Others counter that copyright protects expression, not ideas, and that most LLM output is non‑verbatim and thus likely non‑infringing under current law. Several point to the US “idea–expression” doctrine and existing tests for derivative works.
  • There’s concern that if courts accept AI training as fair use, traditional OSS protections become unenforceable: no meaningful way to opt out, no way to detect misuse, and no path to compensation.
  • Some OSS authors are fine with permissive use (MIT/BSD mindset), see AI as another user, and care mainly about disclaimers or minimal attribution. Others say they’ll stop publishing OSS altogether.

Business Models, Tailwind, and Open Core

  • Tailwind is cited as a cautionary tale: AI reduced docs traffic and (reportedly) can reproduce paid components, undermining an “open core + paid UI” model.
  • Broader worry: AI makes “freemium OSS + paid extras” fragile, accelerating a shift either to fully closed source or to OSS as largely unpaid hobby work.
  • A minority argue OSS was always economically shaky; AI just exposes a pre‑existing “tragedy of the commons”.

AI Coding Quality, “Vibe Coding,” and Maintainability

  • Strong split in lived experience: some report 5–10x throughput with coding agents (especially for boilerplate, TDD harnesses, and ports against existing test suites), while others say they spend as long fixing AI output as they would have spent writing it from scratch.
  • Several horror stories: agents committing secrets, deleting home directories, adding large amounts of dead or subtly wrong code, and generating shallow or misleading tests.
  • Experienced users stress that good results require:
    • Very detailed specs and constraints,
    • Tight human review,
    • Strong automated tests, and
    • Understanding of what models can’t do (global architecture, nuanced domain rules).
  • Critics argue this just turns devs into editors of opaque, probabilistic “slop,” eroding deep understanding and long‑term maintainability.
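As a concrete illustration of the “strong automated tests” and “agents committing secrets” points above, here is a minimal sketch of one mechanical guardrail: a secret scan run over agent-generated changes before a human reviews them. The pattern names and regexes here are illustrative assumptions, not a production-grade scanner:

```python
import re

# Illustrative secret patterns (an assumption for this sketch,
# not an exhaustive or production-grade list).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)\b(password|secret|api_key|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_for_secrets(diff_text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

if __name__ == "__main__":
    # Hypothetical agent-generated snippet being gated before review.
    sample = 'db_url = "postgres://app/db"\napi_key = "sk-1234567890abcdef"\n'
    for lineno, name in scan_for_secrets(sample):
        print(f"line {lineno}: possible {name}")
```

In practice a check like this would sit alongside a real scanner and the project’s test suite in CI or a pre-commit hook, so that obviously unsafe agent output is rejected mechanically rather than depending entirely on reviewer attention.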

Jobs, Power, and Economic Uncertainty

  • Many see AI as “labor theft”: past OSS and paid work suddenly has new value as training data without compensation, while companies talk about shrinking engineering teams.
  • Others argue productivity gains historically increase demand for software, but there’s no consensus this time; some expect fewer, more leveraged dev jobs and worse inequality.
  • UBI is discussed but seen as politically unlikely and insufficient without broader changes (debt, taxation, market power).

Hype, Anti‑Hype, and Adoption Pressure

  • One camp: not learning AI tools now is career malpractice; effective use is a deep skill that compounds over years.
  • Another camp: tools, models, and workflows change so fast that “early adopter advantage” is overstated; better to wait for stabilization and clearer business economics.
  • Several note a crypto‑like vibe: massive investment, unclear sustainable revenue, and risk of a sharp correction even if the tech itself persists.

Broader Social and Ethical Concerns

  • Recurrent themes: centralization of compute and models in a few giants, environmental and energy costs, surveillance and “programming as a subscription,” and the use of AI in propaganda and workplace monitoring.
  • Some distinguish “AI as a genuinely useful tool” from “AI as a business and political project,” supporting the former while opposing the latter’s current trajectory.