The threat is a comfortable drift toward not understanding what you're doing
Scope of concern: training people vs producing outputs
- Many agree the real “product” of PhD programs is capable scientists, not papers; agents that do the work for students undermine that goal.
- Central worry: students like “Bob” can complete projects via agents without internalizing the methods, while “Alice” builds transferable intuition by struggling through the work.
- Some point out this distinction is badly understood in industry, where shipping artifacts is rewarded and process/understanding is often ignored.
Tools, abstraction, and forgotten skills
- One camp says LLMs are just the next abstraction layer, akin to compilers, calculators, or higher-level languages; society routinely delegates low-level skills.
- The opposing camp says the analogy is flawed: calculators/compilers are deterministic and well‑specified; LLMs are probabilistic, often wrong, and require prior expertise to supervise.
- Many note we still withhold calculators early in math education; by analogy, LLMs should be withheld until fundamentals are solid.
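The determinism point above can be made concrete with a toy sketch. `toy_llm` and its answer vocabulary are invented purely for illustration, not a real model or API: a calculator-style function returns the same output for the same input every time, while a sampler drawn from a distribution can disagree with itself across runs.

```python
import random

def calculator(x, y):
    """Deterministic tool: identical inputs always yield identical output."""
    return x * y

def toy_llm(prompt, seed=None):
    """Toy stand-in for an LLM: samples an answer from a distribution,
    so repeated calls (with different seeds) can disagree."""
    rng = random.Random(seed)
    vocab = ["4", "5", "four", "approximately 4"]  # invented for illustration
    weights = [0.6, 0.1, 0.2, 0.1]
    return rng.choices(vocab, weights=weights, k=1)[0]

print(calculator(2, 2))  # always 4, every run
print({toy_llm("2*2?", seed=s) for s in range(20)})  # a set of sampled answers
```

The asymmetry is the point of the analogy debate: you can spot-check a calculator once and trust it thereafter; a sampler has to be checked on every output.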
Reliability, hallucination, and supervision
- Repeated examples of LLMs fabricating plots, coefficients, and references; outputs can be polished but wrong.
- Consensus that current models are only safe when a domain expert can independently evaluate and cross‑check results.
- Disagreement over trajectories: some think hallucinations will shrink to human‑like bug rates; others argue error modes are structural to current architectures.
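The "safe only under expert supervision" claim suggests a minimal cross-checking sketch. Everything below is invented for illustration (the fabricated title especially); a real pipeline would query a citation database rather than a hard-coded set, but the shape of the check is the same: anything the model cites that can't be independently verified gets flagged for human review.

```python
# Toy supervision check: flag model-cited references that do not appear
# in a trusted bibliography. Titles here are illustrative placeholders.

trusted_bibliography = {
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
}

model_citations = [
    "Attention Is All You Need",                   # verifiable
    "Unified Scaling Laws for Everything (2021)",  # plausible-sounding, fabricated
]

suspect = [c for c in model_citations if c not in trusted_bibliography]
print(suspect)  # the fabricated entry surfaces for human review
```

Note what the check cannot do: it catches references that fail lookup, but a fabricated coefficient or a polished-but-wrong plot passes any syntactic filter, which is why the thread converges on domain expertise as the only reliable backstop.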
Economic and career impacts
- Some think “if Bob can do things with agents, he can do things,” and markets will reward that; others respond the market will still need—and eventually only pay well for—what agents can’t do.
- Fear that most programmers become “McDonald’s cooks” assembling AI slop, with pay and status to match, while a small elite retains deep skills.
- Debate over whether AI companies and APIs are genuinely profitable vs VC‑subsidized, and what happens if costs rise or a bubble bursts.
Education, assessment, and possible countermeasures
- Suggestions: emphasize oral defenses, in‑depth questioning on every step and figure, and exams that an LLM can’t sit on a student’s behalf.
- Some propose using AI strictly as a tutor/hint‑giver, not as an answer generator; others note real students often want shortcuts and will offload as much as allowed.
- Worry that widespread AI use makes it harder to distinguish genuine expertise from AI‑mediated performance until much later.
Long‑term societal risks
- Several foresee “knowledge chains” breaking: fewer people able to rebuild complex systems (science, lithography, operating systems) from first principles.
- Concern that models trained on AI‑generated slop will degrade over time.
- Others argue specialization and partial ignorance are already the norm, and civilization has long advanced by expanding what we can do without thinking about it—though how far this can stretch with LLMs remains unclear.