AI should only run as fast as we can catch up
Pace of AI Progress and Impact on Developers
- Some argue AI will outstrip all human programmers within a few years and eliminate a large share of software jobs, driven by huge economic incentives.
- Others call this irrational extrapolation, noting similar claims since GPT‑3 and warning about assuming exponential improvement instead of a plateau.
- There’s disagreement on whether current models are already “good at coding”: many say yes in absolute terms; others say they still fail badly in complex, real-world codebases.
Quality, Reliability, and “Nondeterminism”
- Several point out that AI-generated code is often superficially plausible but wrong in subtle ways, especially in large legacy systems.
- A long side-thread clarifies that LLM inference is deterministic in principle (fixed weights plus greedy decoding); what matters in practice is reliability, not determinism. Sampling temperature and batching-dependent floating-point effects are what make API behavior appear nondeterministic.
- The key concern: AI outputs lack the guarantees we expect from compilers, type systems, and tests.
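The determinism point above can be made concrete with a toy sketch (not a real LLM; `pick_token` and the logits are illustrative assumptions): greedy decoding always returns the same token for the same logits, while temperature sampling draws from a distribution and can vary between calls.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=0.0, rng=None):
    """Greedy when temperature == 0; otherwise sample from the distribution."""
    if temperature == 0.0:
        # Deterministic: argmax of the logits, same answer every call.
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax(logits, temperature)
    return (rng or random).choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.3]

# Greedy decoding: identical input gives identical output, every time.
assert all(pick_token(logits) == pick_token(logits) for _ in range(100))

# Sampling: identical input can yield different tokens across calls.
rng = random.Random(42)
samples = {pick_token(logits, temperature=1.0, rng=rng) for _ in range(100)}
print(samples)
```

With 100 draws from this distribution, more than one distinct token is overwhelmingly likely, which is the "apparent nondeterminism" the thread describes; real serving stacks add batching-dependent floating-point variation on top.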
Verification, Testing, and “Verification Debt”
- Many agree the core issue is verification asymmetry: AI can generate huge amounts of code faster than humans can confidently review.
- People predict “verification debt” will surpass traditional tech debt without strong automated tests, workload simulation, previews, and organizational standards.
- TDD, formal verification, strong type systems, and platform-enforced patterns are highlighted as ways to make "spot-checking" meaningful; others feel these are just old QA/TDD ideas being rediscovered under an AI banner.
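One way to make spot-checking meaningful, as the bullets above suggest, is a property-based check against a trusted oracle. A minimal stdlib-only sketch (the `merge_sorted` function stands in for hypothetical AI-generated code):

```python
import random

def merge_sorted(a, b):
    """Stand-in for an AI-generated helper: merge two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

# Property-based spot check: compare against a trusted oracle (sorted)
# on many random inputs instead of eyeballing the generated code.
rng = random.Random(0)
for _ in range(1000):
    a = sorted(rng.randrange(100) for _ in range(rng.randrange(20)))
    b = sorted(rng.randrange(100) for _ in range(rng.randrange(20)))
    assert merge_sorted(a, b) == sorted(a + b), (a, b)
print("all property checks passed")
```

The point is the asymmetry: the oracle plus random inputs costs a few lines to write, while manually reviewing every generated branch does not scale.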
Practical AI Coding Workflows
- AI shines on small, greenfield, well-structured projects but struggles with large, messy monoliths and sprawling microservices without careful context management.
- Effective patterns: method-level generation, AI-assisted refactors, AI-written tests for human-written code, and iteratively building AI-readable documentation.
- Some envision future roles where developers act more like product/verification managers over AI agents; others warn about over-reliance and hidden complexity.
Human Expertise, Overtrust, and Other Domains
- Multiple comments stress that AI amplifies existing skill: experts can judge and steer it; novices can’t reliably tell good from bad output (code, config, or world‑peace advice).
- Overtrust is seen as dangerous; anecdotes show people treating AI as an oracle, even in gambling.
- Visual design is used as a counterexample to the claim that “everyone can verify images”: trained designers see many issues non-experts miss.
Superintelligence, Alignment, and Utopias
- Some dismiss AI-utopian or AI-doom narratives as sci‑fi fanfiction lacking a theory of power or realistic alignment path.
- Others argue alignment may be extremely hard or unsolved, and that a truly superintelligent system might pursue goals misaligned with human autonomy.