No right to relicense this project
Project rewrite and relicensing
- The library’s v7.0.0 release is a near-total rewrite, produced in a few days with an LLM, and relicensed from LGPL to MIT while keeping the same name, repository, and version history.
- Many see this as “license-washing”: an attempt to escape copyleft obligations while retaining the project’s accumulated reputation and ecosystem position.
- Others argue a full rewrite with a different internal architecture and similar API can be a new work, and thus legitimately MIT-licensed.
Derivative work vs. clean-room implementation
- One side claims any rewrite by people heavily exposed to the original LGPL code (and using an LLM trained on it) is presumptively a derivative work, and so must remain under the LGPL.
- Counterpoint: copyright law does not require a “clean room”; exposure alone doesn’t prove infringement. What matters is whether protectable expression was copied.
- There is disagreement over the burden of proof: some say accusers must show substantial similarity; others argue the maintainers effectively admitted derivation by keeping the name, API, and version lineage.
AI-generated code and copyright status
- Several commenters note recent rulings that purely AI-generated works are not copyrightable (at least in the US), raising the question of whether the v7 code can be licensed at all or is effectively public domain.
- Others push back that humans guiding AI may still be authors and, separately, that AI output can still be a derivative work of training data.
- There is concern that if courts accepted LLM rewrites as “original,” this would effectively gut copyright and copyleft for software.
Ethics, governance, and open source norms
- Many see the move as ethically wrong even if it were legal: a maintainer acting as a trustee for a community project is perceived as unilaterally changing the social contract.
- Suggested “proper” approach: create a new project and name, or obtain explicit relicensing consent from all prior contributors.
- Debate over GPL/LGPL: some call them “problematic” licenses; others argue they work as intended to keep improvements free and defend end-user rights.
Security, quality, and ecosystem risk
- The huge one-shot AI rewrite (hundreds of thousands of lines deleted and replaced) is viewed as a potential supply-chain hazard: impossible to review properly, with changed test coverage and CI initially broken.
- Claims of “drop-in” compatibility are disputed: running the v6 test suite shows that behavior and encoding labels differ in practice.
- Broader concern: core dependencies in ecosystems like Python being silently replaced with unvetted AI-generated code.
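One way commenters could substantiate or refute the “drop-in” claim is a golden-file regression harness: record the old version’s outputs on fixture inputs, then diff the new version against them. A minimal sketch (the `detect` function and fixture labels here are hypothetical stand-ins, not the library’s actual API or recorded behavior):

```python
# Hypothetical regression harness for a "drop-in" upgrade claim.
# GOLDEN_V6 would hold labels recorded from the old (v6) release;
# detect() is a stand-in for the new release's detection entry point.

GOLDEN_V6: dict[bytes, str] = {
    b"\xef\xbb\xbfhello": "UTF-8-SIG",          # UTF-8 with BOM
    "héllo".encode("latin-1"): "ISO-8859-1",    # non-ASCII single-byte text
}

def detect(data: bytes) -> str:
    """Stand-in detector; in a real harness, import the new version's detect()."""
    if data.startswith(b"\xef\xbb\xbf"):
        return "UTF-8-SIG"
    try:
        data.decode("ascii")
        return "ascii"
    except UnicodeDecodeError:
        return "ISO-8859-1"

def drop_in_mismatches() -> list[bytes]:
    """Return every fixture where the new label differs from the recorded v6 label."""
    return [data for data, label in GOLDEN_V6.items() if detect(data) != label]
```

Note that even “equally correct” answers (say, `UTF-8` where v6 said `UTF-8-SIG`) would show up as mismatches, which is exactly the kind of label drift that breaks downstream code comparing strings.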