LLM code generation may lead to an erosion of trust

Onboarding, Learning, and Use of LLMs

  • Disagreement over banning LLMs for juniors: some say struggling through onboarding complexity is an important learning crucible; others argue that LLMs excel at environment setup, code search, and summarization, and that withholding them is counterproductive.
  • Several note that tools can either accelerate real understanding (when used by people who reflect on solutions) or enable copy‑paste behavior with no learning—LLMs amplify both patterns.

“AI Cliff” and Context Degradation

  • Multiple commenters recognize the described “AI cliff” / “context rot” / “context drunk” phenomenon: as conversations get long or problems grow too complex, models start thrashing, compounding their own earlier mistakes.
  • Workarounds mentioned: restarting sessions, pruning context, summarizing state into a fresh chat, breaking work into smaller steps, or using agentic tools that manage context and run tests (a minimal summarize-and-restart sketch follows this list).
  • People differ on severity: for some it’s a frequent blocker; others mostly see it when “vibe coding” without feedback loops or when tackling problems that are too large in one go.
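
One way to read the “summarize state into a fresh chat” workaround is as a small piece of tooling: ask the model to compress the long transcript into a short state summary, then seed a new conversation with only that summary. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x); the model name, prompt wording, and turn threshold are illustrative placeholders, not details from the discussion.

```python
# Minimal sketch of the "summarize state into a fresh chat" workaround.
# Assumptions: OpenAI Python SDK (v1.x); model name and threshold are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"          # placeholder model name
MAX_TURNS_BEFORE_RESET = 20    # illustrative threshold, not a recommendation

def restart_with_summary(history: list[dict]) -> list[dict]:
    """Compress the old transcript into a short state summary and start fresh."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    summary = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Summarize the current task state, decisions made, and "
                       "open questions in under 200 words:\n\n" + transcript,
        }],
    ).choices[0].message.content
    # The new chat carries only the distilled state, not the full
    # (possibly error-compounding) history.
    return [{"role": "system", "content": "Prior context summary:\n" + summary}]

def ask(history: list[dict], user_msg: str) -> tuple[list[dict], str]:
    """Append a user turn, get a reply, and reset context once the chat grows long."""
    if len(history) > MAX_TURNS_BEFORE_RESET:
        history = restart_with_summary(history)
    history = history + [{"role": "user", "content": user_msg}]
    reply = client.chat.completions.create(model=MODEL, messages=history)
    answer = reply.choices[0].message.content
    return history + [{"role": "assistant", "content": answer}], answer
```

The design choice mirrors what commenters describe doing by hand: rather than letting a degraded context keep steering the model, the distilled summary becomes the only thing the next session sees.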

Trust, Heuristics, and Code Review

  • Central theme: LLMs make it harder to infer a developer’s competence from the shape, style, and explanation of their patch.
  • Previously, reviewers used cues like clear explanations, idiomatic style, commit granularity, and past behavior to decide how deeply to review. With LLMs capable of producing polished code and prose, those shortcuts feel less safe.
  • Some argue this is healthy—heuristics were never proof and reviewers should fully verify anyway. Others say the practical cost is high: more exhaustive reviews, no “safe” shortcuts, and burnout.
  • There is debate over process vs outcome: one camp wants to prohibit or flag LLM‑generated code to preserve trust; the other insists only the final code and tests should matter, regardless of tools.

Quality, Verification, and Documentation

  • Many note that LLM‑assisted code often has more bugs, over‑engineering, and complexity unless actively constrained and refactored by an experienced engineer.
  • Increased reliance on LLMs is said to demand stronger testing and QA, but some doubt that tests and AI “judges” (with ~80% agreement with humans in one cited claim) are reliable enough; a toy sketch of how such an agreement rate is computed follows this list.
  • Several complain of LLM‑written emails and documentation: fluent but muddy, overcomplicated, and often missing key nuance, which erodes trust in polished text generally.
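
To make the “~80% agreement” figure concrete: judge reliability of this kind is typically reported as the fraction of items on which the model judge’s verdict matches a human reviewer’s. A toy sketch of that arithmetic; the labels are made-up placeholders, not data from the thread.

```python
# Toy sketch: agreement rate between an LLM "judge" and human reviewers.
# The verdicts below are illustrative placeholders, not data from the discussion.
human_verdicts = ["pass", "fail", "pass", "pass", "fail",
                  "pass", "fail", "pass", "pass", "fail"]
judge_verdicts = ["pass", "fail", "pass", "fail", "fail",
                  "pass", "pass", "pass", "pass", "fail"]

matches = sum(h == j for h, j in zip(human_verdicts, judge_verdicts))
agreement = matches / len(human_verdicts)
print(f"judge-human agreement: {agreement:.0%}")  # 80% in this toy example
```

Raw percent agreement can overstate reliability when one verdict dominates; chance-corrected measures such as Cohen’s kappa are the standard corrective.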

Open Source vs Industry Trust Models

  • Commenters highlight a difference between open source and corporate teams:
    • FOSS projects rely heavily on interpersonal trust and reputation; LLMs undermine the ability to map code quality to contributor skill, raising the review burden.
    • In industry, many see LLMs as just another productivity tool: if something breaks, teams patch it, blame is diffuse, and trust is more tied to process (tests, reviews, velocity) than individual authorship.

Skills, Cognition, and Inevitable Adoption

  • Recurrent analogy: LLMs as calculators, excavators, or cars—tools that atrophy some skills while massively increasing throughput. Some welcome that tradeoff; others fear cognitive decline and “vibe programmers” whose skill ceiling is the model.
  • Many believe resisting LLMs outright is futile; the realistic path is to learn them deeply, constrain their use, and build processes (tests, review norms, toolchains) that acknowledge their failure modes.