"Just Fucking Ship It" (Or: On Vibecoding)

Security failures in the app

  • Commenters are stunned that a production iOS app for teens/kids shipped with:
    • Hardcoded OpenAI keys
    • Wide‑open Supabase backend with full access to user data
  • Several highlight the severity: nearly a thousand minors’ photos, ages, and live locations were exposed; some call it “criminal” and bordering on gross negligence.
  • Some note Supabase does surface security advisories, but they are seen as noisy and not very actionable.
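The “wide‑open backend” failure mode above comes down to one missing filter. A hedged sketch, assuming the usual cause: a Supabase (Postgres) table with Row Level Security disabled, so the public anon key shipped in the app returns every row. With RLS enabled and a policy like `auth.uid() = owner_id`, each requester only sees their own rows. The TypeScript below merely emulates that per‑row check; `ProfileRow` and `visibleRows` are illustrative names, not Supabase APIs:

```typescript
// What a row-level policy `USING (auth.uid() = owner_id)` effectively does.
// Without it, any client holding the anon key gets every row back.

interface ProfileRow {
  ownerId: string;
  photoUrl: string;
  location: string;
}

// Emulates the policy check: anonymous callers see nothing,
// authenticated callers see only rows they own.
function visibleRows(requesterId: string | null, rows: ProfileRow[]): ProfileRow[] {
  if (requesterId === null) return [];
  return rows.filter(r => r.ownerId === requesterId);
}

const rows: ProfileRow[] = [
  { ownerId: "alice", photoUrl: "a.jpg", location: "somewhere" },
  { ownerId: "bob",   photoUrl: "b.jpg", location: "elsewhere" },
];

// With the policy, "alice" gets 1 row; with RLS off, she would get all 2.
console.log(visibleRows("alice", rows).length); // 1
```

In actual Supabase this is standard Postgres: `ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;` plus a `CREATE POLICY` clause referencing `auth.uid()`; the sketch just shows what skipping that step costs.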

Responsible disclosure and ethics

  • Debate centers on whether the blog post is “responsible disclosure” or a harmful “how‑to exploit” guide.
  • One side: given the seriousness (children’s data) and the developer’s initial reluctance to fix things, public shaming and pressure are justified, even necessary.
  • Other side: the tone is smug and vindictive, and detailed exploitation steps arguably made the kids more vulnerable; the author should have escalated to Apple before publishing.
  • Later comments note the post was temporarily taken down, updated with a disclosure timeline, and the researcher began working with the developer to remediate.

AI, “vibecoding”, and software quality

  • Many see this as a case study in “vibecoding”: using LLMs/Cursor/Claude Code to ship quickly without understanding basics like key management or security.
  • Some compare it to past waves (PHP, early Node) where newcomers produced insecure apps; they argue the solution is better tools and education, not gatekeeping.
  • Others say LLMs are qualitatively different: non‑technical people can now ship general‑purpose software at scale, often without caring about correctness or safety.

LLM agents, Supabase, and data access

  • Discussion branches into Supabase’s MCP/agent story and prompt‑injection risks.
  • One camp: tools/agents are fine if you sandbox them, grant least privilege (e.g., read‑only access to prod, tightly scoped writes elsewhere), and treat them as dev tools.
  • Opposing view: as long as agents autonomously act on untrusted input, secure automation is fundamentally fragile; better to use LLMs inside constrained, predefined workflows.
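The least‑privilege camp’s position can be sketched as a gate in front of the agent’s database tool. A minimal illustration under stated assumptions: `runAgentQuery` and `isReadOnly` are hypothetical names, not a real Supabase/MCP API, and string matching is defense‑in‑depth only; the real enforcement belongs in the database itself, e.g. a role with `default_transaction_read_only = on`:

```typescript
// Naive read-only gate: only pass statements whose first keyword is SELECT.
// Deliberately strict (no WITH/EXPLAIN) because CTEs can embed writes in Postgres.
function isReadOnly(sql: string): boolean {
  const first = sql.trim().split(/\s+/)[0]?.toUpperCase() ?? "";
  return first === "SELECT";
}

// Gatekeeper the agent must go through; `execute` stands in for the real driver.
// An agent holding only this function cannot issue DELETE/UPDATE/DROP statements.
function runAgentQuery(sql: string, execute: (s: string) => unknown): unknown {
  if (!isReadOnly(sql)) {
    throw new Error(`blocked non-read-only statement: ${sql.slice(0, 40)}`);
  }
  return execute(sql);
}
```

The opposing view in the thread still applies: a prompt‑injected agent can do damage with reads alone (exfiltration), which is why some argue for constrained, predefined workflows rather than open‑ended tool access.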

Platforms, incentives, and broader industry

  • Several criticize Apple for approving the app at all while taking a 30% cut, arguing App Store review and “kids app” rules failed here.
  • Others generalize: VC and AI hype reward speed and revenue over safety, and the internet is increasingly filled with insecure “slop” that will create lots of cleanup and security work.