AI Resistance: recent anti‑AI sentiment and arguments worth discussing
Overall sentiment and spread of AI resistance
- Commenters disagree on how widespread anti‑AI feeling is.
- Some report mostly enthusiasm or pragmatic use in everyday life, especially outside tech hubs.
- Others see strong hostility, especially in online, younger, or arts communities, and on certain platforms (e.g., Reddit vs X).
- Several argue tech workers are unusually anxious because they “see how the sausage is made” and feel more directly threatened.
Jobs, capitalism, and inequality
- A large cluster worries AI will accelerate job loss, especially white‑collar work, without any credible path to safety nets like UBI.
- Left‑leaning critics say AI is being used to deepen wealth concentration: automation replaces workers while ownership and profits remain with a small elite.
- Some push back that productivity gains have historically improved living standards; others counter that recent decades show rising inequality and stagnating economic security.
Existential vs near‑term risks
- Thread notes that "anti‑AI" groups are diverse:
  - Some fear superintelligent "unaligned" systems causing human extinction or massive die‑off.
  - Many more focus on nearer‑term harms: enshittified services, biased decisions, deepfakes, surveillance, and reckless deployment of mediocre systems into critical roles.
Data scraping, copyright, and “information wants to be free”
- Strong resentment toward large labs scraping public content without consent or compensation.
- Others argue training on public data is analogous to humans reading books, and expanding copyright to block training would be inconsistent with earlier fights against DRM.
- There’s tension between historical “information should be free” attitudes and a newer desire to withhold or poison data to resist corporate AI.
Model poisoning and data quality
- Some are excited by poisoning as an attack surface and a form of resistance; they suggest targeting low‑value niche topics, where companies have little incentive to fix errors, to undermine trust in model output.
- Skeptics say:
  - Training data is increasingly curated; bad or obviously synthetic content is filtered.
  - Public attacks can be used to train detectors, making defenses easier than attacks.
  - One‑off hoaxes (fake diseases, fictional TV plots, "Fortnite doesn't exist" jokes) often affect retrieval and search layers more than base models.
- There’s debate over whether overfitting, double descent, and “model collapse” make large models fragile or surprisingly robust.
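The curation argument above can be made concrete with a minimal sketch. This is not any lab's actual pipeline; the heuristics and thresholds below are illustrative assumptions showing why crude poisoning (spam, duplicates, templated text) is cheap to filter before training:

```python
# Illustrative sketch of pre-training data curation: exact deduplication
# plus simple heuristics for spammy or templated text. All thresholds
# here are assumptions for demonstration, not a real lab's filter.
import hashlib
import re

def looks_synthetic(text: str) -> bool:
    """Crude heuristics that flag low-quality or templated documents."""
    words = text.split()
    if len(words) < 5:
        return True
    # Very low vocabulary diversity suggests repetitive/spun content.
    if len(set(w.lower() for w in words)) / len(words) < 0.3:
        return True
    # Long runs of repeated punctuation are a common spam artifact.
    if re.search(r"([!?.])\1{3,}", text):
        return True
    return False

def curate(documents):
    """Drop exact duplicates and documents flagged as synthetic."""
    seen = set()
    kept = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest in seen or looks_synthetic(doc):
            continue
        seen.add(digest)
        kept.append(doc)
    return kept

docs = [
    "The Luddites opposed how machines were used, not machinery itself.",
    "The Luddites opposed how machines were used, not machinery itself.",  # exact duplicate
    "buy buy buy buy buy buy buy buy buy buy now!!!!",                     # repetitive spam
    "Model collapse refers to degradation when models train on their own output.",
]
print(len(curate(docs)))  # -> 2
```

Real pipelines are far more elaborate (near-duplicate detection, classifier-based quality scoring), but even this level of filtering illustrates the skeptics' point: a poisoning campaign has to survive curation that is specifically designed to discard content looking like the attack.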
Historical analogies and Luddism
- Some liken AI resisters to Luddites or early car opponents and predict they’ll fail to slow adoption.
- Others counter that resistance has sometimes worked (nuclear bans, cloning, GMOs) and argue AI is uniquely centralized, coercive, and widely hated compared to the internet or smartphones.
- Several emphasize original Luddites opposed how owners used machines to worsen labor conditions, not technology itself.
Real‑world use, “slop,” and hidden adoption
- Visible “AI slop” (spammy marketing, low‑effort content, hallucinations) fuels backlash and mistrust.
- Commenters note much impactful use is invisible: coding assistance, documentation, internal tools, process automation – changes likely to continue regardless of public sentiment.
- Some see AI as overhyped “cheap bullshit at scale”; others as genuinely transformative but currently misused and overmarketed.
Governance, corporate power, and leadership
- Many distrust major AI CEOs; their public remarks about massive job losses and "inevitable" deployment are seen as provocative and as galvanizing resistance.
- There’s interest in “responsible AI” middle ground, but pessimism that venture and geopolitical incentives favor maximal, centralized deployment over cautious, public‑interest use.