AI assistance when contributing to the Linux kernel
Policy overview and intent
- Linux kernel policy: AI-assisted code is allowed, but:
  - Only humans may sign the Developer Certificate of Origin (DCO).
  - The human submitter is fully responsible for correctness, licensing, and ongoing maintenance.
  - An `Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]` tag is recommended, and may list both AI and non-AI tools.
- Many see this as a “boring, sane” policy that treats AI as just another tool while making responsibility explicit.
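The trailer format above can be sketched as an ordinary commit message. This is a minimal illustration in a throwaway repository; the agent name, model version, tool, and author here are hypothetical, and only the trailer layout follows the policy.

```shell
# Create a disposable repo so the example is self-contained.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Jane Developer"
git config user.email "jane@example.com"

echo 'demo' > example.c
git add example.c

# The commit body carries both the recommended Assisted-by trailer
# (hypothetical agent/model/tool names) and the human-signed DCO line.
git commit -q -m "example: demonstrate attribution trailers

Assisted-by: ExampleAgent:model-1.0 [clang-analyzer]
Signed-off-by: Jane Developer <jane@example.com>"

# Show the trailers recorded on the commit.
git log -1 --format=%B | grep -E '^(Assisted-by|Signed-off-by):'
```

Note that only the `Signed-off-by:` line carries DCO weight; the `Assisted-by:` trailer is informational.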
Liability and legal responsibility
- One side: responsibility should rest with the human who submits code; tools (including AI) have no legal or moral agency.
- Others argue liability may still extend to the Linux Foundation and large distributors, since AI copyright issues are a foreseeable risk and DCOs may not hold up in court.
- DCO is viewed as liability mitigation, not a shield; distributors of infringing code can still be sued, though lack of intent may reduce penalties.
- Some think the liability problem is overblown compared to long‑standing risks from human contributors; others expect future legal tests.
Copyright, licensing, and AI training
- Concern: models are trained on mixed-license and proprietary code; contributors cannot realistically guarantee GPL‑compatibility or non‑infringement.
- Debate over whether AI output is:
  - Public domain / uncopyrightable (making GPL enforcement murky), or
  - Copyrightable by a human if there is “sufficient human creative input.”
- The interaction between the public domain and the GPL is discussed: public-domain code can be incorporated into a GPL work, but the original upstream code remains public domain.
- Disagreement over whether independent-creation defenses apply to AI‑assisted code as they do to human authors, and whether regurgitation of training data is common or a rare “bug.”
Practical impact on development and review
- Some worry about “AI slop” contributions from people who don’t understand the code, using AI just to boost résumés.
- Others note the same problem already exists with low‑quality human contributions; what matters is review quality and tests.
- AI attribution is seen as useful for:
  - Auditing and future cleanup.
  - Understanding tool usage patterns.
  - Possibly tracking systemic issues in AI‑generated code.
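The auditing use case above can be sketched with `git log --grep`. The repository, commit contents, and agent name here are made up for illustration; only the trailer name comes from the policy.

```shell
# Build a disposable repo with one plain and one AI-assisted commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "A Maintainer"
git config user.email "maint@example.com"

echo 'v1' > f.c; git add f.c
git commit -q -m "f: human-written change

Signed-off-by: A Maintainer <maint@example.com>"

echo 'v2' > f.c; git add f.c
git commit -q -m "f: AI-assisted change

Assisted-by: ExampleAgent:model-1.0
Signed-off-by: A Maintainer <maint@example.com>"

# List only the commits whose message carries the trailer;
# --grep matches each line of the commit message.
git log --grep='^Assisted-by:' --format='%h %s'
```

A maintainer could run the same `--grep` against a real history to scope a future cleanup or measure how widely AI tools are used.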
- Concerns that review bandwidth won’t keep up if AI accelerates patch volume; subtle bugs may slip through even with “clean” AI code.
Community attitudes and broader ethics
- Strong polarization:
  - Some regard AI as inevitable and essential for productivity; refusing it is seen as self‑handicapping.
  - Others are viscerally opposed, threatening boycotts or forks over any AI‑assisted kernel code.
- Ethical worries: mass scraping of open source without consent, erosion of attribution, corporate control, widened inequality, and potential erosion of open‑source licensing power.