I Am Tired of AI
AI and Jobs / Adoption Pressure
- Some argue ignoring AI risks unemployment; others with established careers say they can safely avoid it and see “AI or jobless” as fearmongering.
- Many expect white‑collar roles (coding, writing, support) to be heavily automated; others note that in operations/IT they haven’t yet seen jobs lost for not using AI.
- A recurring view: people will adopt AI even when they use it badly, and everyone else will bear the consequences.
Quality and Detectability of AI Output
- Many say AI text has a bland, median “TOEFL essay” tone, overly polite and generic, often obvious on sight.
- Others point out studies showing humans are bad at reliably detecting AI text; “you only notice the bad ones.”
- Some see current AI art/text/music as mediocre, but note that cheap, “good-enough” output can still transform markets.
Copyright, “Theft”, and Training Data
- Large sub‑thread on whether mass scraping for training is “the biggest theft in history” or just copying information.
- Disputes over law: some argue copyright only cares about outputs, not training; others think model weights themselves are derivative works if memorized text can be recovered.
- Many resent that corporations get away with training on others’ work while aggressively enforcing their own IP.
- Split between those who would abolish or weaken copyright for everyone and those who would repurpose it (GPL‑style) to force open models or shared benefits.
Centralization, Capitalism, and Power
- Widespread concern that AI will deepen corporate concentration (OpenAI, Google, Meta) and build moats via exclusive data deals.
- Others counter that open‑weight models (e.g., Llama family) push in a more decentralized direction and may compress profits.
Trust, Information Overload, and “Slop”
- Many feel they can no longer trust new writing, since it may be partially or wholly AI‑generated; this erodes the sense of human connection and “proof of work.”
- Worries that AI is a force multiplier for spam, SEO sludge, propaganda and synthetic reviews, making it much harder and costlier to find reliable information.
- Some argue everything was already full of low‑quality content; AI just changes scale, not nature.
Usefulness vs Limitations of Current Tools
- Enthusiasts report real productivity gains: coding assistants (Cursor, Claude, GPT‑4/o1) for boilerplate, refactors, tests; summarization; translation; quick scripts.
- Common pattern: treat models as a “junior dev” or “bad intern” whose work must be reviewed line‑by‑line.
- Others find tools like Copilot unreliable or net‑negative and feel gaslit by hype.
- Strong consensus that AI is stochastic and must not be used as a trusted oracle or sole decision‑maker.
Ethics, Creativity, and Human Work
- Creators fear devaluation of human writing, art, and music; some pledge never to use AI and market “100% human” work as a differentiator.
- Others see AI as a powerful editor, idea generator, or on‑ramp, with humans still responsible for taste, intent, and final judgment.
- The thread repeatedly contrasts excitement about technical progress with fatigue over relentless hype, “AI‑washing” of products, and AI's murky social costs.