My AI Adoption Journey

Overall reaction & tone

  • Many readers praise the post as unusually pragmatic and hype‑free, valuing its honest description of modest but real gains.
  • Some note it matches their own journey: initial skepticism, disappointment with chatbots, then gradual usefulness once they adopted agents with structure and constraints.
  • Others remain unconvinced, saying the workflow described doesn’t fit their work patterns or hasn’t yielded value in their own experiments.

How people actually use LLMs/agents

  • Strong agreement on scoping: avoid “draw the owl” mega-prompts; decompose work into small, verifiable, reviewable diffs.
  • “Harness engineering” (AGENTS.md, scripts, test harnesses) is seen as key to avoiding drift and keeping agents on-spec.
  • Several describe using agents as junior devs: they generate code or plans, humans run tests, review diffs, and refine specs.
  • Parallelization is a frequently cited benefit: run multiple agent tasks concurrently while doing other work, then review the results in batches.
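The scoping and parallelization pattern above can be sketched in a few lines. This is a minimal illustration, not anyone's actual harness: `run_agent` is a hypothetical stand-in for whatever agent CLI or API you use, and the task strings are invented examples.

```python
import concurrent.futures

# Hypothetical stand-in for an agent invocation; in a real harness this
# would launch an agent session constrained (via AGENTS.md, tests, etc.)
# to produce one small, reviewable diff for the given task.
def run_agent(task: str) -> str:
    return f"diff for: {task}"

# Decompose the work into small, independently verifiable tasks rather
# than one "draw the owl" mega-prompt.
tasks = [
    "add input validation to parse_config()",
    "write unit tests for parse_config()",
    "update README with the new config options",
]

# Run the agent tasks in parallel, then review the diffs as a batch.
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    diffs = list(pool.map(run_agent, tasks))

for task, diff in zip(tasks, diffs):
    print(f"--- review queue: {task}\n    {diff}")
```

The point of the structure is that each task is small enough that the human review step (running tests, reading the diff) stays tractable.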

Productivity, costs, and evidence

  • Some commenters report large personal productivity gains, especially in boilerplate, test generation, refactors, and research.
  • Others emphasize that reading/reviewing code is the true bottleneck; faster generation can even be a net negative.
  • A frequently cited small METR study found a productivity drop for experienced open-source developers using a specific tool; debate ensues over how far that result generalizes.
  • Cost is a concern: reported spend ranges from ~$1,500–1,600 per year up to low hundreds of dollars per month; some say it’s worth it, others find it prohibitive.

Code quality, review, and safety

  • Many insist that rigorous code review and testing remain non‑negotiable; “vibe coding” without inspection is widely criticized.
  • There’s worry about unmaintainable “AI slop” flooding repos and about organizations prioritizing speed over quality and security.
  • Some use multiple agents/models for cross‑checking, or specialized “review agents” to flag style, security, or performance issues.
  • Tool execution capabilities (file access, shell, HTTP) raise security fears; sandboxing, containers, and tools like syscall guards are recommended.
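The sandboxing concern in the last bullet can be made concrete with a small guard around tool execution. This is a hedged sketch only: the allowlist, limits, and `run_tool` helper are invented for illustration, and real deployments would layer this under a container or syscall filter (e.g. seccomp) rather than rely on it alone.

```python
import resource
import shlex
import subprocess

# Hypothetical allowlist: the only binaries an agent may execute.
ALLOWED = {"ls", "cat", "git", "pytest"}

def _limits() -> None:
    # Cap CPU seconds and address space for the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))

def run_tool(cmdline: str) -> str:
    # No shell interpretation: split into argv and check the binary.
    argv = shlex.split(cmdline)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"tool not allowlisted: {argv[:1]}")
    result = subprocess.run(
        argv, capture_output=True, text=True,
        timeout=10, preexec_fn=_limits,  # preexec_fn is POSIX-only
    )
    return result.stdout

print(run_tool("ls ."))  # permitted: listed binary, bounded resources
try:
    run_tool("curl http://example.com")  # blocked: not on the allowlist
except PermissionError as exc:
    print("blocked:", exc)
```

The guard addresses only command execution; file access and HTTP from inside the agent process itself still need OS-level confinement, which is why commenters reach for containers and syscall guards.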

Craft, skills, and cognition

  • A vocal group argues that AI assistance erodes hard‑won skills (e.g., test writing), and undermines the intrinsic joy of doing the “hard parts” oneself.
  • Others say there is no craft vs AI dichotomy: offload drudgery to spend more time on design, architecture, and interesting problems.
  • Multiple long comments frame AI as threatening the “stare”: the deep, unmediated thinking time where real understanding and innovation happen.

Organizational and ecosystem issues

  • Several note that current success stories are mostly solo or small‑project workflows; it’s unclear how agentic coding transforms large organizations with established review and compliance processes.
  • People complain about rapid model/tool churn and non‑transferable “prompt intuition,” leading some to retreat to simpler, editor‑centric workflows and manual context management.