I don't care how well your "AI" works
Nature of Tools & “Automating Agency”
- One core dispute: are tools value‑neutral, or do they embed values and shape behavior?
- Examples used: levers enabling monumental architecture and surplus extraction; motorbikes that “want to be ridden dangerously”; nuclear weapons fundamentally altering geopolitics.
- Applied to AI, some argue LLMs are different from traditional deterministic tools: they “automate agency,” replacing the human wielder rather than extending them, primarily to cut labor costs. Others say this logic indicts all tools and isn’t AI‑specific.
AI, Work, and Devaluation of Craft
- Many programmers don’t feel their craft is devalued: seniors report higher pay, less physical strain than other jobs, and large productivity gains from AI assistance.
- Others point to mass layoffs, a collapse in junior roles, and sharply worse hiring conditions, especially in the US and Western Europe.
- Several see a familiar pattern: automation first erases low‑skill, repetitive tasks, then compresses wages and narrows the path to seniority.
- Some frame LLMs as another round of labor discipline: a way to reduce tech workers’ bargaining power rather than to replace them wholesale (yet).
Effectiveness and Risks of LLM‑Based Coding
- Supporters report large speedups for boilerplate, integrations, refactors, tests, and documentation; LLMs are likened to a “super‑powered search engine” or “smart autocomplete.”
- Critics say AI is often counterproductive: it produces plausible but wrong code, bloated solutions, and unreadable “slop” that seniors must debug, turning them into janitors.
- Concern that “vibe‑coded” systems become instant legacy: no one truly understands the code or its underlying theory, which undermines maintainability and safety.
- Debate over whether people are actually faster: some studies (linked in thread) suggest perceived speedups can mask real slowdowns.
Power, Capitalism, and Surveillance
- A strong faction argues AI is structurally designed to centralize control: massive capex, gigantic datacenters, and proprietary models make it an ideal tool for megacorps and authoritarian states.
- Others counter that this is true of most transformative tech (computers, the internet, databases); what matters is ownership, regulation, and open‑source alternatives, not rejecting the tech outright.
- Some argue the real danger is AI used for surveillance, persuasion, and narrative control, not code generation.
Cognition, Learning, and Over‑Reliance
- Anti‑AI voices fear erosion of deep understanding: if juniors outsource learning to LLMs, skills atrophy and real expertise thins out; analogy to skipping “wax on, wax off.”
- Others note we’ve long externalized cognition (writing, calculators, Google) without catastrophe; the issue is how and when we offload, not offloading itself.
- There’s anxiety about tools that are unreliable by design: a calculator either works or fails visibly, whereas an LLM can silently hallucinate.
Hacker Culture and Identity
- Some see “progressive hacker circles” rejecting AI as a betrayal of the classic hacker ethos of curiosity and experimentation.
- Others argue the current AI wave is tightly bound to corporate surveillance and closed infrastructure, so skepticism is in line with hacker values of autonomy and transparency.
- Broader lament that “hacker culture” has been diluted by money, status, and corporate norms; AI becomes another flashpoint in that identity struggle.
Middle‑Ground Positions & Futures
- Several commenters advocate a pragmatic stance: treat AI like a calculator or an IDE, using it where it clearly helps (summaries, boilerplate, translation, exploratory coding) and avoiding it where correctness, safety, or learning matter most.
- Others pin their hopes on smaller, local, or open‑weight models as a way to separate AI’s capabilities from corporate control.
- Underneath the polemics, there’s shared uncertainty: no clear consensus on whether AI will expand meaningful work or accelerate its commodification. The only agreement is that ignoring it entirely is risky, and so is adopting it blindly.