Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

Rationale for the no‑LLM / Certificate of Origin policy

  • The main concern is review burden: LLMs make superficially plausible code cheap to produce but expensive to review, especially for a complex system like an OS kernel.
  • Maintainer time is scarce and unpaid; filtering out low‑effort, AI‑generated “slop” is seen as essential to protecting it.
  • Some view it as legal risk management given unsettled copyright status of LLM output and potential GPL “taint.”

Enforceability and “honor system”

  • Many argue the ban is technically unenforceable, since high‑quality LLM code can’t be reliably distinguished from human code.
  • Others counter that most rules rely on attestation and social consequences: contributors sign a certificate of origin, and lying can justify bans.
  • The policy text targets content “clearly labelled” as LLM‑generated; ambiguous or “submarine” use is handled case‑by‑case, with lying framed as a serious breach of trust.
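
Mechanically, certificate-of-origin attestation usually rides on git’s built-in sign-off support, as with the Linux kernel’s Developer Certificate of Origin. A minimal sketch of how the attestation trailer is produced and checked, assuming Redox accepts the standard Signed-off-by convention (the repository, name, and email below are placeholders):

```shell
# Create a throwaway repo to demonstrate the sign-off trailer.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.name "Jane Contributor"
git config user.email "jane@example.com"

echo "fn main() {}" > main.rs
git add main.rs

# -s / --signoff appends a "Signed-off-by: Name <email>" trailer to the
# commit message — the contributor's attestation under the certificate
# of origin.
git commit -q -s -m "Add entry point"

# CI or a maintainer can verify the attestation by grepping the message.
git log -1 --format=%B | grep "Signed-off-by: Jane Contributor <jane@example.com>"
```

Lying in that trailer is exactly the attestation breach the thread describes: the line itself proves nothing about how the code was written, only who is standing behind it.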

Impact on contributors and OSS culture

  • Some foresee fewer drive‑by contributions and a shift to trusted, pre‑vetted contributors or even default‑deny for outside PRs.
  • Others worry this erodes traditional “send a small PR” entry paths and favors clique‑like communities or people willing to hang out in chats.
  • There is debate on whether forbidding LLMs is fair to non‑native speakers or those using “autocomplete on steroids”; some say those uses are minor, others say the policy doesn’t clearly differentiate between them.

Views on LLM usefulness and risks

  • The pro‑LLM camp reports large productivity gains, especially with modern “agentic” tools and strong test harnesses, and sees hand‑coding everything as soon becoming a niche, hobbyist choice.
  • Skeptics say LLMs produce verbose, inconsistent, or subtly wrong code and reviews/tests still dominate the time cost. They predict mountains of tech debt and unreliable systems.
  • Opinions are split on whether an OS can reasonably be built without “massive” LLM use: some say history proves it can; others say modern scope and unpaid labor make that unrealistic.

Legal and licensing concerns

  • Disagreement over whether LLM output is copyrightable, a derivative work, or “tainted” when trained on GPL code.
  • Some projects explicitly treat LLM‑generated code as presumptively tainted; others argue this is over‑cautious or conceptually wrong.

Alternative proposals

  • Allow LLM use but require contributors to:
    • Fully understand, explain, and stand behind the code.
    • Provide prompts and audit trails with PRs.
    • Use LLMs only for docs, translation, or small edits.
  • Some suggest maintainers should generate code with their own agents instead of reviewing strangers’ AI‑produced patches.
  • Expectation that forks using LLMs will proliferate; whether they surpass “artisanal” upstreams is seen as an open question.