AI should elevate your thinking, not replace it

Perceived decline in engineering skill (before and after AI)

  • Many argue “engineers who can’t think” have always existed; AI just hands them a new crutch, much as copy‑pasting from Stack Overflow once did.
  • Others say degrees and titles already overstate competence; AI makes weak engineers harder to detect because it produces plausible‑looking output.
  • Some see modern “software engineering” as lightweight plumbing or bureaucracy rather than rigorous engineering.

AI-assisted coding: two main usage patterns

  • Productive pattern: use AI to remove drudgery (boilerplate, lookups, examples), while retaining ownership of design, reasoning, and review.
  • Risky pattern: treat AI as an abstraction layer or “ghostwriter” that produces and even explains code and designs; engineers become a “front‑end to Claude/ChatGPT.”

Skill atrophy, learning, and juniors

  • Strong concern that juniors will skip the painful learning loop (debugging, design, reading docs) and never build real intuition or judgment.
  • Counterpoint: every generation leans on new tools (calculators, IDEs); the skills you truly need get maintained, while others legitimately atrophy.
  • Several suggest keeping AI out of early education or using it only as a tutor, not as a coder.

Abstraction vs black box: compilers, libraries, and LLMs

  • Many reject the “LLMs are just the next abstraction like compilers” analogy:
    • Compilers are deterministic, specified, auditable; LLMs are stochastic, underspecified, and inconsistent.
    • You rarely need to inspect compiler‑generated assembly, but you do have to review AI output, so it doesn’t free cognitive load in the same way.
  • Others say in practice people are treating LLMs like non-deterministic compilers or agents, often without adequate review.
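The determinism contrast above can be made concrete with a toy sketch (illustrative only: `compile_expr` and `fake_llm` are hypothetical stand‑ins, not real tools or model APIs). A compiler‑like pure function maps the same input to the same output every time and is auditable by replay; a sampling‑based generator is not.

```python
import random

def compile_expr(src: str) -> str:
    """Compiler-like: a pure, specified mapping. Same input -> same output, always."""
    return src.replace("PLUS", "+")

def fake_llm(prompt: str) -> str:
    """LLM-like: the completion is sampled, so repeated calls can differ
    (analogous to decoding with temperature > 0)."""
    completions = ["a + b", "a+b  # sum", "add(a, b)"]
    return random.choice(completions)

# The compiler can be audited by replaying inputs; the sampler must be
# reviewed output-by-output, which is the asymmetry the thread points at.
assert compile_expr("a PLUS b") == compile_expr("a PLUS b")  # deterministic
distinct = {fake_llm("add a and b") for _ in range(100)}     # usually several variants
```

The point of the sketch is not the code itself but the review burden it implies: with the pure function you verify the mapping once, whereas with the sampler every individual output needs inspection.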

Productivity, volume, and code quality

  • AI greatly speeds up boilerplate and exploration; some claim 10x+ productivity or the ability to juggle many more projects.
  • Reviewers report being overwhelmed by large, low-quality AI PRs; volume encourages “rubber-stamp” reviews and hidden bugs.
  • Teams describe systems degrading once they start “doing what the AI suggests” uncritically, and then having to pause work to reset standards.

Org pressures, hiring, and incentives

  • Management often pushes for AI usage and output metrics, even when quality drops, and may overestimate AI reliability.
  • Some foresee a class of employees who mostly sit in meetings and YOLO AI code for years, shielded by org politics.
  • Hiring becomes harder: AI lets candidates fake competence; interview loops may need to focus more on reasoning than polished answers.

Debate over what “engineering” is

  • Long thread on whether most software work qualifies as “engineering” in the rigorous, accredited sense.
  • Some argue real engineering rigor exists only in niches (aviation, medical, safety‑critical); most software is ad hoc and economically tuned.
  • Others note that even traditional disciplines often do pragmatic, low‑rigor work; software is not uniquely unserious.

Analogies: calculators, GPS, exoskeletons, social media

  • Pro‑AI side: like calculators or IDEs, AI frees you from low‑level details so you can tackle harder problems.
  • Skeptical side: LLMs differ because they’re non-deterministic, unbounded in domain, and can replace reasoning itself, not just arithmetic.
  • Many worry about “cognitive atrophy,” comparing LLM dependence to GPS dulling our sense of direction or smartphones eroding attention.

Experiences and usage patterns

  • Some seniors report feeling more mentally taxed: they must constantly steer, critique, and constrain verbose models.
  • Others say AI restored joy by removing tedious parts and letting them focus on architecture, invariants, and domain modeling.
  • A recurring line: if AI vanished tomorrow, could you still design, debug, and maintain your systems after a few years of tool dependence?

Meta: AI-written arguments about AI

  • Multiple commenters felt the linked essay itself “reads like AI,” and a detector flagged it as such; the author (in-thread) said they only used AI for editing and critique.
  • This sparked a side concern: over-reliance on AI detectors and the difficulty of trusting authorship and intent in an AI-saturated discourse.