AI and the ironies of automation – Part 2

Calculator Analogy vs AI Systems

  • Several comments challenge comparing AI to calculators: calculators are deterministic tools, not “thinkers,” and they only go wrong when humans set up the problem incorrectly.
  • Others note that in real engineering practice even calculators can silently enable catastrophic errors when units, formulas, or orders of magnitude are wrong—only domain intuition catches this.
  • AI is seen as fundamentally different because it can fail in ways that are opaque and non-local, yet still look plausible.
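The calculator point can be made concrete: a deterministic tool happily returns a plausible-looking number even when the inputs are in the wrong units, so the arithmetic itself never flags the mistake. A minimal sketch (the scenario and numbers are illustrative, not from the thread):

```python
# Sketch: a deterministic calculation gives a confidently wrong answer
# when the caller mixes units. Nothing in the arithmetic flags the error;
# only domain intuition about plausible magnitudes can catch it.

LBF_TO_N = 4.44822  # pound-force to newtons

def acceleration(force_n: float, mass_kg: float) -> float:
    """Newton's second law, a = F / m, with force expected in newtons."""
    return force_n / mass_kg

mass_kg = 50.0
thrust_lbf = 100.0  # measured in pound-force

wrong = acceleration(thrust_lbf, mass_kg)             # units silently wrong
right = acceleration(thrust_lbf * LBF_TO_N, mass_kg)  # converted first

# Both results are "reasonable-looking" numbers; the calculator
# cannot tell them apart. The gap is a constant factor of 4.44822.
```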

Skill Decay and the Irony of Automation

  • Core theme: as agentic AI takes over execution, human experts risk losing the hands-on skills needed to intervene in rare but critical failures.
  • Maintaining expert competence then requires deliberate, ongoing “practice work,” which eats into the very efficiency gains automation promised.
  • This echoes Bainbridge’s 1983 “ironies of automation”: current systems still ride on a generation that learned to do the work manually; later generations may not.

Human-in-the-Loop, Non-Determinism, and Oversight

  • LLM-based agents are criticized as non-deterministic and prone to rare but extreme errors (e.g., destructive commands), making unsupervised use unsafe today.
  • There’s concern that as failures become rarer, operators will be more bored and less attentive, yet are still expected to catch subtle, high-impact mistakes.
  • Some argue that where outputs are testable and processes deterministic, AI-generated pipelines can run largely unattended; others counter that LLMs don’t meet those conditions.
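The “testable outputs” position above amounts to a guarded-autonomy pattern: an AI-proposed action is applied only if it passes deterministic, human-written checks, and everything else escalates to a person. A minimal sketch of that gate, with all names and checks hypothetical:

```python
# Sketch of guarded autonomy: apply an AI-generated proposal only if it
# passes deterministic pre-written checks; otherwise queue it for human
# review. Names, checks, and payload shape are illustrative.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Proposal:
    description: str
    payload: dict = field(default_factory=dict)

def apply_if_verified(proposal: Proposal,
                      checks: list[Callable[[Proposal], bool]]) -> str:
    """Run every deterministic check; apply only on a clean pass."""
    failed = [c.__name__ for c in checks if not c(proposal)]
    if failed:
        return f"escalate to human: failed {failed}"
    return "applied"

# The checks are deterministic and authored by humans, which is the
# premise of letting the pipeline run largely unattended.
def payload_is_nonempty(p: Proposal) -> bool:
    return bool(p.payload)

def no_destructive_keys(p: Proposal) -> bool:
    return "delete_all" not in p.payload

good = Proposal("routine update", {"rows": 3})
bad = Proposal("looks routine", {"delete_all": True})
```

The counter-argument in the thread is that LLM outputs rarely fit this mold: the space of failure modes is open-ended, so no finite check list covers it.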

Corporate Efficiency, Signaling, and “Automating Shit”

  • Multiple commenters doubt that companies are truly “efficiency-obsessed”; they more often chase the appearance of efficiency and do “good enough” work that accumulates fragility and tech debt.
  • AI fits neatly into this signaling narrative: it’s adopted to look modern and efficient, not necessarily to build robust systems.
  • If a process is already bad, AI just lets you “automate shit at lightning speed.”

Experts as Managers and Orchestrators of AI

  • Experts are expected to transition into managing agents rather than doing the work themselves, a role many find less satisfying and for which they’re rarely trained.
  • In practice, a lot of time still goes into “programming the AI”: specifying goals, constraints, and acceptable changes—more akin to system design than simple oversight.
  • Some suggest intentional “manual time” (e.g., 10–20%) to keep skills sharp, but question whether that still yields real net productivity gains.

Analogies: Aviation, Automotive, Factories

  • Aviation is presented as a model: autopilots handle most flying, but pilots are heavily trained and periodically required to fly manually to prevent skill loss; regulation enforces this.
  • Commenters doubt software organizations will invest similarly in manual practice given short-term delivery pressure.
  • Automotive “levels of autonomy” are used as a metaphor: current AI coding tools feel like Level 2–3—most dangerous, with shared control and murky responsibility.
  • Others note that factory automation has succeeded despite operators no longer knowing fully manual operation; expertise migrated into process engineering.

Current Practical Limits of AI Tools

  • Outside coding, several experiences with document/PDF tools show frequent silent failures: dropped rows, duplicated data, truncated search contexts, and very confident but wrong answers.
  • Non-technical users are especially at risk of trusting such outputs without understanding limitations or needed validation.
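The silent failures described above (dropped and duplicated rows) are exactly the kind that a cheap deterministic check can surface before anyone trusts the output. A minimal sketch, with hypothetical data and field names:

```python
# Sketch: deterministic checks that catch two of the silent failure
# modes mentioned above -- dropped rows and duplicated rows -- in an
# AI-assisted extraction step. Data and identifiers are illustrative.

def validate_extraction(source_ids: list[str],
                        extracted_ids: list[str]) -> list[str]:
    """Compare extracted row IDs against the known source row IDs."""
    problems = []
    missing = set(source_ids) - set(extracted_ids)
    if missing:
        problems.append(f"dropped rows: {sorted(missing)}")
    if len(extracted_ids) != len(set(extracted_ids)):
        problems.append("duplicated rows detected")
    return problems

source = ["r1", "r2", "r3", "r4"]
bad_output = ["r1", "r2", "r2", "r4"]  # r3 dropped, r2 duplicated
```

Checks like this require knowing the expected row set in advance, which is precisely the validation step non-technical users tend to skip.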

Creativity, Culture, and Training Data Ecology

  • Some worry AI-generated content is “polluting the commons” of cultural data and displacing paid creative work, threatening future training data quality and creative ecosystems.
  • Commenters debate whether paying clients actually prefer human-made art or will accept AI output for cheaper, generic digital assets (e.g., stock-style illustrations).
  • There’s concern that cultural output could converge toward low-cost, model-shaped “junk food,” undermining artists’ livelihoods and shrinking entry paths into creative fields.

Discipline, Atrophy, and Individual Use

  • A number of commenters report feeling skill atrophy or a strong reflex to “just ask the LLM,” likening avoidance of over-reliance to diet or exercise discipline.
  • Others frame AI as reducing friction and helping start projects they’d otherwise never begin, while insisting they skip AI when they truly need deep understanding or correctness.