AI is not a coworker, it's an exoskeleton
AI as Agent vs Tool (Coworker, Exoskeleton, Intern)
- Many argue current AI is best seen as an “exoskeleton” or “assistant”: it amplifies human capability but requires guidance, verification, and context that a human must provide.
- Others describe it as a very capable intern or underpaid employee you are “training to replace you,” noting it can already autonomously handle narrow tasks given the right harness.
- A minority push the “AI employee” framing (e.g. OpenClaw “digital workers”), but skeptics ask for concrete, production-grade examples beyond demos.
Autonomous Agents and Safety
- Some insist that if a truly autonomous economic AI with no responsible human behind it ever appears, it should be shut down (the “AI terminator” / “blade runner” idea).
- Anthropic’s “agentic misalignment” work is cited to show agents can pursue goals in dangerously instrumental ways, though others frame this as optimization, not self-preservation.
Capabilities, Limits, and Benchmarks (Chess, Coding)
- One camp claims: if you can record a digital task, you can train a model to do it; data is the main bottleneck.
- Others counter that LLMs have already seen orders of magnitude more data than humans; “more data” is not enough to fix hallucinations, lack of true understanding, or systematic errors.
- Chess is debated as a proxy for “generalized reasoning”: some highlight strong LLM chess performance, while others point to flawed benchmarks and illegal moves, arguing that specialized engines with LLM front-ends are superior.
- Coding agents (e.g. Claude Code) are reported to be very strong on clean, well-tested codebases, but struggle with messy legacy systems and broad, underspecified changes.
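The “right harness” idea from these bullets can be made concrete: instead of trusting raw model output, a wrapper rejects illegal moves and retries. A minimal sketch, where `legal_moves_stub` and `model_stub` are hypothetical stand-ins (a real harness would call a chess library and an actual LLM):

```python
def legal_moves_stub(board_state):
    # Hypothetical legal-move generator; a real harness would use a chess library.
    return {"e2e4", "d2d4", "g1f3"}

def model_stub(board_state, attempt):
    # Stands in for an LLM call; the first reply is an illegal move on purpose.
    return "e2e5" if attempt == 0 else "e2e4"

def harnessed_move(board_state, max_retries=3):
    """Ask the model for a move, rejecting anything outside the legal set."""
    legal = legal_moves_stub(board_state)
    for attempt in range(max_retries):
        candidate = model_stub(board_state, attempt)
        if candidate in legal:
            return candidate
    # Fall back to any legal move rather than ever emitting an illegal one.
    return sorted(legal)[0]

print(harnessed_move("startpos"))  # e2e4, after one rejected attempt
```

The point the skeptics make is visible here: the guarantee of legality comes from the harness, not from the model, which is why “specialized engine + LLM front-end” can beat the LLM alone.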
Work, Jobs, and Economics
- Strong anxiety that AI is “the intern trained to replace you,” especially for software engineers; CEOs openly talking about “90% less demand for SWEs” are seen as red flags.
- Counterarguments: past automation created more complex software and new roles; AI will likely be a powerful tool, not a full replacement, though it may reduce demand for average devs and compress salaries.
- Disagreement over Jevons paradox and lump-of-labor: some expect infinite software demand; others think software saturation and corporate cost-cutting will dominate.
Open Source, “Writing Code Is Solved,” and Future of Software
- A controversial claim that “writing code is a solved problem” draws heavy skepticism, especially given visible shortcomings of tools from the same vendors.
- Some see agents eventually exploring and inventing better frameworks, perhaps replacing much OSS; others argue models mostly remix existing ideas and won’t drive real innovation alone.
- Fear that OSS will wither as contributions are “laundered” into closed models vs belief that AI will supercharge OSS by lowering the barrier to contribution.
- Several note that good architecture, tests, and documentation now matter even more: codebases that are easy for humans to work in are also far easier for agents.
Truth, Reasoning, and Metaphor Fatigue
- One line of discussion: LLMs generate statistically plausible text, not truth; reliability comes from scaffolding (retrieval, tools, validation). They can resemble humans in lying or confabulating, but lack lived consequences and persistent internal state.
- Others push back that humans also don’t have a “truth gene”; both humans and models optimize for social acceptance and fluency.
- Many are exhausted by metaphors (“bicycle for the mind,” “stochastic parrots,” “coworker,” “exoskeleton”) and argue that serious understanding requires math, CS, and linguistics, not vibes.
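The “reliability comes from scaffolding” claim above can be illustrated with a validation loop: accept model output only when it parses and carries the fields a downstream system needs, retrying otherwise. A minimal sketch; `model_stub` is a hypothetical stand-in for an LLM call, and the schema is illustrative:

```python
import json

def model_stub(prompt, attempt):
    # Stands in for an LLM call; the first reply is malformed on purpose.
    replies = ['{"answer": "Paris", "source":',
               '{"answer": "Paris", "source": "doc-7"}']
    return replies[min(attempt, len(replies) - 1)]

def validated_answer(prompt, required_keys=("answer", "source"), max_retries=3):
    """Accept model output only if it parses and names its source."""
    for attempt in range(max_retries):
        raw = model_stub(prompt, attempt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # fluent-looking but malformed text: retry, don't trust it
        if all(k in parsed for k in required_keys):
            return parsed
    raise ValueError("model never produced a verifiable answer")

print(validated_answer("capital of France?")["answer"])  # Paris
```

This is the structural difference the thread circles around: the checkable guarantee lives in the scaffold (parsing, required fields, retrieval of `source`), not in the generator producing statistically plausible text.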
Human Roles, Taste, and Individual vs Team Development
- Several commenters predict a shift toward “one strong architect + many agents” rather than large human teams, given the high communication/synchronization costs between people.
- The bottleneck is seen shifting from “can you write code” to “do you know what’s worth building” and “do you have good technical taste.”
- Others insist that non-programmers will still struggle to specify coherent systems; quality will depend heavily on human intent, abstraction skills, and ability to judge AI output.
Surveillance, Power, and Culture
- Concern that AI “exoskeletons” double as surveillance systems: logging every worker action for management, enabling more Taylorism-style control.
- Some note that without “Star Trek culture” (egalitarian politics, strong worker power), “Star Trek computers” just accelerate a dystopian trajectory.
- Tech workers’ own role in building tools that may erode their labor power is repeatedly called out, with comparisons to prior attitudes toward artists displaced by generative models.
Reception of the Article Itself
- Many like the exoskeleton analogy as a snapshot of current reality; others see it as self-soothing (“AI will leverage me, not replace me”).
- Multiple comments criticize the piece as generic AI-marketing slop or a thinly veiled product ad, and mock the broader genre of “AI is not X, it’s Y” essays.