AI is an impediment to learning web development
State of “Modern” Web Development
- Many see contemporary web stacks (React, Next.js, massive dependency trees) as overcomplicated for most sites; classic stacks (HTML/CSS, minimal JS, PHP, etc.) still work and are often saner.
- Others argue the churn has largely settled (React + Postgres as de facto standard) and that one can simply ignore new fads.
- Some say the web’s real problems are ads and tracking, not frameworks, though others separate the business vs. tech issues.
LLMs as Learning Tools vs. Impediments
- Strong concern: LLMs are harmful for “0→1” learning. Beginners can’t distinguish right from wrong output; they get plausible but incorrect code and never build mental models.
- Supporters say LLMs are transformative for self‑learning: interactive explanations, quick overviews, step‑by‑step help, and contextualization beat static docs or search.
- Several liken this to calculators: indispensable once fundamentals are learned, but problematic if introduced too early.
- Automation bias is a recurring worry: as tools get better, people over‑trust them and disengage from reasoning.
Code Quality, Maintenance, and Professionalism
- Many report AI‑generated code that “works” but is off‑kilter: non‑idiomatic, brittle, missing edge cases, hard to maintain.
- Fear that LLMs will accelerate production of “slop,” widen the gap between strong engineers and others, and increase security/edge‑case bugs.
- Emphasis from some that professionals must not commit code they don’t understand; LLMs are fine for boilerplate, tests, and small glue code, but not for entire solutions.
- Others counter that much code has always been bad; LLMs mostly speed up existing copy‑paste/StackOverflow behavior.
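As a hypothetical illustration of the “works but brittle” pattern described above (the function names and price-parsing scenario are invented for this sketch, not drawn from the thread): a plausible happy-path snippet next to a version handling the edge cases a careful reviewer would flag.

```python
def parse_price_naive(text: str) -> float:
    # Plausible generated output: works for "$19.99" but silently
    # breaks on thousands separators or surrounding whitespace.
    return float(text.replace("$", ""))

def parse_price(text: str) -> float:
    # Hardened version: normalize common formatting and fail loudly
    # on input that is not actually a price.
    cleaned = text.strip().replace("$", "").replace(",", "")
    if not cleaned:
        raise ValueError(f"empty price string: {text!r}")
    try:
        return float(cleaned)
    except ValueError:
        raise ValueError(f"not a price: {text!r}") from None
```

Here `parse_price_naive("$1,299.00")` raises `ValueError` because `float` cannot parse the comma, while `parse_price` returns `1299.0` — the kind of gap that only surfaces once real data arrives.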
How People Actually Use LLMs
- Positive patterns: rubber‑ducking, conceptual explanations, quick syntax reminders, scaffolding small components, refactoring support, test generation.
- Negative patterns: inline autocomplete that interrupts one’s own thinking, wholesale generation of features in unfamiliar stacks, using AI to bypass learning and assignments.
- Several practitioners deliberately disable or restrict code generation when learning something new to preserve deep understanding.
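One concrete shape of the “generate tests, review them yourself” pattern mentioned above (the `slugify` function and its test cases are invented for this sketch): let the model draft the cases, then verify each expected value by hand before trusting it.

```python
import re

def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into a single
    # hyphen, and trim hyphens from both ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Model-drafted cases, each expected value checked by hand
# before being committed:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaced--  ") == "spaced"
assert slugify("Already-a-slug") == "already-a-slug"
```

The point of the review step is that a wrong expected value in a generated test locks a bug in rather than catching it, so the human, not the model, remains the oracle.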
Broader Reflections
- Some see complaints as gatekeeping or elitist; LLMs let non‑experts build useful things they otherwise never would.
- Others worry about long‑term cognitive atrophy, attention erosion (analogous to social media), and the difficulty of teaching in a world where AI tools are ubiquitous.