Malus – Clean Room as a Service
Nature of the Site: Satire, but Uncomfortably Plausible
- Many commenters initially took Malus as real, then realized it was satire from the tone, testimonials, footer, name (“Malus” ≈ Latin for “evil”), and the FOSDEM talk.
- Others argue that, satire or not, it accurately reflects current incentives and behavior around AI and licenses (“Torment Nexus” vibe).
- Some claim the working Stripe payment flow and actual code generation make it a “real” service wrapped in satire; others insist it’s purely parody. Its status remains unclear.
Open Source Licensing and “License Laundering”
- Core joke/concern: using AI to clean-room reimplement copyleft/AGPL libraries and relicense them permissively.
- Many see this as an attack on the social contract of OSS: licenses and attribution are the only “payment” maintainers get.
- Some say they’d pull their code offline if it were “washed” from strong copyleft to MIT-style.
- Others view mass reimplementation as ultimately undermining the need for restrictive licenses at all.
Feasibility and Legality of AI Clean-Rooms
- Skeptics note that LLMs are trained on OSS code, so “clean room” claims are dubious; a true clean room would require a model trained on a corpus excluding the target, which is likely impractical.
- Examples are given of LLMs reproducing OSS files nearly verbatim, showing high contamination risk.
- Legal consensus in the thread: traditional clean-room requires provable separation of spec and implementation; AI training muddies this. No clear case law yet.
- Some point out that if such tactics are legal for OSS, they’d logically apply to proprietary software too.
Impact on the OSS Ecosystem
- Fear that widespread license washing will:
  - Demotivate maintainers and collapse collaborative infrastructure.
  - Push projects to hide tests or key parts, or abandon OSS as a business model.
- Others argue OSS is already heavily corporate-funded and resilient, or that AI-generated reimplementations will hurt large vendors as much as small projects.
Broader Social, Legal, and Ethical Themes
- Debate over accelerationism: some want rapid disruption to force policy responses (e.g., UBI); others fear chaotic inequality and “legal speedruns.”
- Long sub-thread on how enforcement cost shapes law: AI makes some kinds of copying, reverse engineering, and mass legal threats cheap, challenging old assumptions.
- Several worry that satire like this gives bad actors a roadmap; others counter that the underlying ideas are inevitable anyway.