FOSS in times of war, scarcity and (adversarial) AI [video]
FOSS Freedom and Moral Use
- One camp argues FOSS is fundamentally about user freedom: anyone may use the software for any purpose, including causes creators dislike; this is seen as core to FSF/OSI definitions.
- Others say that treats “freedom” too narrowly. They worry that being indifferent to malicious use (war, oppression, authoritarianism, hyper-capitalism) will eventually destroy the conditions that allow FOSS to exist.
- Some note the classic “paradox of tolerance”: if FOSS communities tolerate all uses, including those aimed at suppressing openness, they may lose FOSS itself.
- There is tension between the Stallman-style focus on user freedom (with proprietary license terms labeled “evil”) and creators who feel they should be able to share code under whatever conditions they choose.
AI Training, Licensing, and Copyright
- Several propose a “GPL for AI”: if models are trained on FOSS, the resulting weights (and possibly outputs) should be released under compatible licenses.
- Others counter that if training on code is legally “fair use” (especially in the US), licenses cannot restrict it; at best you get lawsuits and settlements, mostly affecting large closed models.
- Skeptics argue viral “AI GPL” rules would mostly hamstring open models (less training data, high legal risk) while big companies continue scraping and paying settlements.
- Some suggest state-level enforcement would be necessary; others say that without enforceable copyright, licenses are only polite requests.
- Concern is raised about LLMs ignoring attribution and effectively plagiarizing code or text without respecting original licenses.
War, Geopolitics, and Access
- Discussion of how war changes FOSS: contributions and usage may become nationality-sensitive, with “enemy” countries blocked from projects or platforms (e.g., GitHub blocking accounts in sanctioned countries).
- This clashes with the idea of FOSS as borderless collaboration; some question whether developers from adversarial states should really be excluded from global projects.
- There is also skepticism about the speaker, who is funded via EU programs, criticizing adversarial use while the EU itself funds war efforts.
Security, Trust, and the Limits of Code
- Several commenters see the end of 90s techno‑optimism: the ecosystem was built on the assumption that bad actors were rare, whereas state-level adversaries are now the norm.
- Many doubt that licenses can “legislate good use” when AI can reimplement logic and sidestep restrictions.
- Some advocate formal methods, compartmentalization, and architectures that avoid making privacy/security structurally impossible, while conceding that nothing is fully secure.
- Strong view: code alone cannot solve problems of violence and coercion (the “$5 wrench” argument). Only political power and social organization can counter state force; tech can at best augment that.
- Trust is seen as inevitable; the goal is to avoid blind trust via social mechanisms (chains of trust, federated reputation) rather than pure “zero trust.”
Censorship, Privacy, and Children
- One thread separates privacy from censorship: privacy is essential; some censorship (especially for children) is considered necessary.
- Parents describe the near-impossibility of shielding kids from harmful content given algorithms, smartphones, and weak parental controls.
- Others insist adult anonymous speech and “free” sites should remain, suggesting parallel systems: locked‑down ID‑verified spaces for safe content, and open, anonymous spaces for adults.
Future of FOSS and Techno‑Optimism
- Some see FOSS as a product of a past, more utopian era and doubt it could start today amid “ultra‑shark” capitalism and geopolitical conflict; they worry its survival is in jeopardy.
- Others argue FOSS primarily depends on cheap storage/bandwidth and people willing to share code, not on any particular political climate.
- There’s resignation that once code is public, adversaries (including hostile states) can and will use it; control via licenses or norms is limited.
AI Truth and Ethics Debate
- One commenter claims modern AI (with advanced settings) is typically more truthful than humans and can’t be reliably pushed into blatantly unethical or obviously false statements.
- Others reject this, saying LLMs lack any concept of truth, reasonableness, or ethics; they merely emit statistically likely token sequences with no understanding or intent.
Open Source, Politics, and Business Use
- Some argue open source itself is not inherently political or economically “good”; value comes from how it’s licensed and adopted.
- A point of clarification: many businesses write their own core code but do not systematically prefer closed-source over open-source components; rather, they’re cautious about licensing around their core competency.