Breaking the spell of vibe coding

Using AI Coding Assistants vs Building Fundamentals

  • Several commenters describe a “both/and” path: learn architecture, DDD, patterns, and low-level concepts while learning to work with AI assistants.
  • Others warn that heavy reliance on LLMs erodes fundamentals: you stop thinking about edge cases and error paths, and you lose “taste” and mental ownership of the code.
  • Some report the opposite: AI use increases their exploration of edge cases and enables them to build systems they previously couldn’t have built as solo developers.
  • General agreement that domain knowledge and architectural judgment are still human responsibilities; LLMs excel at local implementation, not system design.

DDD, Design Patterns, and “Real” Software Engineering

  • Mixed views on Domain-Driven Design:
    • Fans see it as a useful philosophy for modeling domains and boundaries, not a rigid pattern.
    • Critics call it over-engineering that often degenerates into complex, hard-to-debug spaghetti.
  • Many prioritize “evergreen” skills: software design, data-intensive systems, operating systems, concurrency, and diverse paradigms (FP, actors, Smalltalk, etc.) to better guide and evaluate AI-generated code.
  • Common theme: we’ve automated “coding” but not “software engineering”; abstraction design, modularization, and complexity management remain weak even in all-human projects.
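The “philosophy, not rigid pattern” view of DDD is easiest to see with a small example. Below is a minimal, hypothetical sketch of one common DDD building block, a value object; the `Money` type and its names are invented for illustration, not taken from any commenter:

```python
from dataclasses import dataclass

# Hypothetical DDD-style "value object": immutable, compared by
# value, and carrying a domain invariant (no mixing currencies)
# inside the model rather than scattered across call sites.
@dataclass(frozen=True)
class Money:
    amount_cents: int
    currency: str

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            # The domain rule is enforced at the type boundary.
            raise ValueError("cannot add amounts in different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

total = Money(1000, "USD").add(Money(250, "USD"))
```

Fans of DDD would say this kind of modeling is about where the rules live, not about any particular class hierarchy; critics would say layers of such objects can accumulate into exactly the hard-to-debug indirection they complain about.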

How to Use LLMs Effectively

  • Productive use is described as a learnable skill (prompting, planning, context management, tool choice), though some argue this skill is far easier to acquire than learning to program, making its difficulty overstated.
  • Spec-driven and agentic workflows are debated:
    • Proponents cite big productivity gains with structured commands/agents.
    • Skeptics say maintaining detailed specs becomes unwieldy at scale, and LLMs struggle with implicit requirements.
  • Several advocate using AI mostly for planning, rubber-ducking, and small, well-bounded tasks rather than large opaque code dumps.

Risk, Productivity, and “Vibe Coding”

  • One axis of debate: which is riskier—using AI too much (bugs, security, skill atrophy, loss of code familiarity) or too little (missed productivity, future unpreparedness)?
  • Others reject the “Pascal’s wager” framing, arguing for incremental validation: start with small, low-risk use cases and expand based on evidence.
  • A cited study found devs using AI felt ~20% faster but were actually ~20% slower; commenters dispute the author’s percentage math but not the basic perception–reality gap.
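One reason the percentage math invites dispute is that “X% faster” and “X% slower” are measured against different baselines, so the two figures are not symmetric. A toy calculation (the numbers are illustrative, not the study’s exact figures):

```python
baseline = 100.0       # minutes for a task without AI (arbitrary unit)

# "Felt ~20% faster" implies an expected time of baseline / 1.2.
felt = baseline / 1.2  # ~83.3 minutes

# "Actually ~20% slower" means a measured time of baseline * 1.2.
actual = baseline * 1.2  # 120 minutes

# The perception-reality gap compounds the two effects:
gap = actual / felt    # tasks took ~44% longer than they felt
```

Whatever the precise figures, the commenters’ point stands: the disagreement is over how the percentages were computed, not over the existence of the gap.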

Dark Flow, Addiction, and Workflow Changes

  • Multiple people resonate with “dark flow”: AI agents make it so easy and fast to prototype that they struggle to stop or sleep; some compare it to slot-machine dopamine.
  • Others note that immersive, sleepless coding sessions long predate AI, but agents’ constant asynchronous activity removes natural pausing points.

Future Trajectory and Hype

  • Some see AI coding as a short-lived bubble; others believe improvement will continue and that ignoring it now presents real career risk.
  • Counter-argument: as tools improve, they’ll get easier and more commoditized, making intense early specialization less critical.
  • Claims that some companies now have “100% AI-written code” are noted; skepticism remains about marketing hype and about executives’ FOMO-driven adoption.
  • Broad but not universal consensus: AI is a powerful tool; abandoning core engineering discipline and review because of it is dangerous.