An AI Vibe Coding Horror Story

General reaction to the story

  • Many commenters are alarmed but not surprised; they see this as an early warning of much worse incidents to come.
  • Some compare “vibe coding” with historic tech fads (Visual Basic, Excel apps) where non-experts built critical systems that later failed.
  • Others think the writeup is so high-level and vague that it reads like “internet fiction,” though several insist similar things are definitely happening.

AI “vibe coding” vs professional development

  • Disagreement over whether hiring consultants would be safer: some say pros would use battle‑tested platforms and avoid obvious mistakes; others argue plenty of human‑written systems are just as bad.
  • Several note that LLMs can produce superficially decent code (sensible schemas, proper password hashing) while missing basic operational/security practices, such as leaving database backups inside the public web root.
  • Consensus that vibe coding is acceptable for prototypes, small utilities, or personal tools, but dangerous for high‑stakes domains like healthcare and finance unless an expert audits everything.
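The "backups in the web root" failure mode mentioned above is concrete enough to sketch. Below is a minimal, hypothetical pre-deploy check (not from the story; the patterns and helper name are illustrative assumptions) that flags backup files and secrets sitting under a public web root before they can be served to anyone who guesses the URL:

```python
from pathlib import Path

# File patterns that should never be reachable from a public web root.
# Illustrative shortlist only; real scanners use much larger rule sets.
RISKY_SUFFIXES = (".sql", ".bak", ".dump", ".tar.gz", ".zip")
RISKY_NAMES = {".env", ".git", "id_rsa"}

def find_exposed_files(web_root: str) -> list[str]:
    """Return paths under web_root that look like backups or secrets."""
    hits = []
    for p in Path(web_root).rglob("*"):
        name = p.name.lower()
        if name in RISKY_NAMES or name.endswith(RISKY_SUFFIXES):
            hits.append(str(p.relative_to(web_root)))
    return sorted(hits)
```

A check like this catches only the known-obvious cases; the commenters' deeper point is that a non-technical builder would not think to run (or write) it at all.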

Security, privacy, and legal liability

  • Strong view that the core problem is not CI or tooling but unreviewed AI output handling sensitive data.
  • Multiple comments urge reporting such systems to data protection authorities (GDPR/DSGVO, Spanish AEPD, CNIL, Ireland’s DPC, etc.), noting some regulators are “brutal.”
  • Anecdotes of similarly insecure systems: open Wi‑Fi exposing law firm file shares, an insurance CRM, a surgeon’s web app leaking backups and credentials.
  • Debate over responsibility: AI vendors for hype and misleading marketing vs. non‑technical users who deploy systems beyond their competence.

Regulation and professionalization of software

  • Lengthy debate on creating software engineering professional bodies with accreditation and personal liability, analogous to civil engineering, medicine, or law.
  • Supporters say high‑risk software (medical devices, health CRMs, pacemakers) should require licensed professionals who can be sanctioned for negligence.
  • Opponents argue this would be rent‑seeking gatekeeping, harm open source and hobbyism, and that many safety laws already exist but are under‑enforced.

Broader implications for AI tooling

  • Some expect future “agent‑native” dev and security tools that automatically set up safer architectures and deployments.
  • Others stress that AI’s competence is “spiky”: it can do hard things but misses obvious pitfalls, and that non‑technical users lack the intuition to even ask the right security questions.
  • Overall sentiment: AI is a powerful but sharp tool; without expertise and proper incentives, more incidents like this are likely.