A look at Cloudflare's AI-coded OAuth library

Meaning and Drift of “Vibe Coding”

  • Several commenters argue “vibe coding” originally meant AI‑generated or copy‑pasted code that humans never meaningfully review.
  • Others read it more broadly: caring only whether something “seems to work” and never inspecting the code itself.
  • Some think stretching the term to cover “not battle‑tested” or just “normal imperfect code” dilutes it into useless marketing jargon.

Cloudflare OAuth Library: Bugs and Review Claims

  • The blog post highlights concrete issues: overly permissive CORS, incorrect Basic auth handling, support for the deprecated implicit grant, biased token randomness, and thin test coverage.
  • One side reads this as humans offloading responsibility to the LLM.
  • Others push back: Cloudflare’s own README claims thorough human review against the RFCs; missing these bugs shows that review is fallible, not that it was abdicated entirely.
  • A Cloudflare engineer joins to defend some design choices, arguing that permissive CORS is safe for bearer‑token endpoints (see the first sketch after this list) and that the token randomness is secure though not maximally efficient, and notes that the LLM did not invent the higher‑level crypto design.
  • The discovery of a biased token generator (second sketch below) still makes some lose confidence in the review quality, even though the bug isn’t practically exploitable.
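
To make the CORS argument concrete, here is a minimal Workers‑style sketch with a hypothetical handler, not the library’s actual code. CORS restrictions protect endpoints that trust ambient credentials such as cookies; an endpoint that only honors an explicit Authorization header gains little from a tight policy, because a cross‑origin attacker cannot make the victim’s browser attach a bearer token it does not hold:

```typescript
// Hypothetical Workers-style handler illustrating the argument.
export default {
  async fetch(req: Request): Promise<Response> {
    const auth = req.headers.get("Authorization") ?? "";
    if (!auth.startsWith("Bearer ")) {
      // No ambient credentials are honored: cookies are ignored, so a
      // cross-origin page can't ride on the user's session.
      return new Response("unauthorized", { status: 401 });
    }
    // ...token validation and the protected resource would go here...
    return new Response("ok", {
      // Wildcard origin is safe in this setup: "*" never permits
      // credentialed requests, and this endpoint ignores cookies anyway.
      headers: { "Access-Control-Allow-Origin": "*" },
    });
  },
};
```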
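
And a hedged illustration of the bias class under discussion, again not the repo’s actual code: with a 62‑character alphabet, 256 % 62 = 8, so reducing a random byte modulo 62 makes the first 8 characters appear 5/256 of the time and the other 54 appear 4/256 of the time:

```typescript
const CHARSET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

// Biased: indices 0-7 map from five byte values each, indices 8-61
// from only four — still random, but measurably skewed.
function biasedToken(length: number): string {
  const bytes = crypto.getRandomValues(new Uint8Array(length));
  return Array.from(bytes, (b) => CHARSET[b % CHARSET.length]).join("");
}

// Standard fix: rejection sampling. Discard bytes >= 248 (the largest
// multiple of 62 below 256) so every character is equally likely.
function unbiasedToken(length: number): string {
  const limit = 256 - (256 % CHARSET.length); // 248
  let out = "";
  while (out.length < length) {
    for (const b of crypto.getRandomValues(new Uint8Array(length))) {
      if (b < limit && out.length < length) {
        out += CHARSET[b % CHARSET.length];
      }
    }
  }
  return out;
}
```

The skew costs only a fraction of a bit of entropy across a whole token, which is why it can be called secure yet not maximally efficient; rejection sampling removes it entirely.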

OAuth and Security Complexity

  • Multiple commenters note OAuth is notoriously tricky; even heavily tested commercial implementations have accumulated hundreds of security bugs.
  • The takeaway for some: this is exactly the kind of domain where deep expertise and exhaustive testing, covering every spec MUST/MUST NOT plus the abuse cases, are mandatory regardless of LLM use (a sketch of spec‑driven tests follows this list).
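
As a sketch of what testing every MUST/MUST NOT looks like in practice, here is a toy TypeScript example; the authorize() stub and its shape are invented for illustration, not taken from any real implementation:

```typescript
import assert from "node:assert";

// Toy authorize endpoint, present only so the assertions below run.
const REGISTERED = new Map([["test-client", "https://app.example/cb"]]);

function authorize(
  params: URLSearchParams,
): { status: number; location?: string } {
  const redirect = params.get("redirect_uri") ?? "";
  // RFC 6749 §3.1.2.4: MUST NOT redirect to an unregistered redirect_uri.
  if (REGISTERED.get(params.get("client_id") ?? "") !== redirect) {
    return { status: 400 };
  }
  // OAuth 2.0 Security BCP: the implicit grant is deprecated; refuse it.
  if (params.get("response_type") !== "code") {
    return { status: 400 };
  }
  return { status: 302, location: `${redirect}?code=demo-code` };
}

// One test per normative statement: unregistered redirect_uri...
assert.strictEqual(
  authorize(new URLSearchParams({
    client_id: "test-client",
    redirect_uri: "https://attacker.example/cb",
    response_type: "code",
  })).status,
  400,
);

// ...and the deprecated implicit grant (response_type=token).
assert.strictEqual(
  authorize(new URLSearchParams({
    client_id: "test-client",
    redirect_uri: "https://app.example/cb",
    response_type: "token",
  })).status,
  400,
);
```

The discipline is mechanical: each normative sentence in the RFCs becomes at least one test case, plus one for the corresponding abuse path.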

LLMs as Coding Tools: Productivity vs. Subtle Bugs

  • Practitioners report roughly 2× speedups on short, throwaway tasks but only 10–20% on larger, long‑lived codebases.
  • They also report many subtle bugs, especially in concurrency, error handling, security, and defaults that merely “look right” (a sketch of the pattern follows this list).
  • LLMs are compared to power tools: great accelerators for experts, dangerous in unskilled hands.
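
A hedged example of the “looks right” failure mode commenters describe, with hypothetical code:

```typescript
type User = { id: string; name: string };

// Hypothetical data layer, simulating an outage for the demo.
const db = {
  async fetchUser(_id: string): Promise<User> {
    throw new Error("connection timeout");
  },
};

// Plausible-looking generated code: it compiles, reads cleanly, and
// quietly converts every infrastructure failure into "user not found".
async function getUser(id: string): Promise<User | null> {
  try {
    return await db.fetchUser(id);
  } catch {
    return null; // timeouts and auth errors now look like cache misses
  }
}

// Prints "null" rather than surfacing the outage; a reviewer skimming
// under time pressure sees only a tidy null-safe helper.
getUser("42").then((u) => console.log(u));
```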

Need for Domain Expertise and “Automation Bias”

  • Many stress that LLMs are most valuable when the user is already an expert who can specify and review output.
  • There’s concern that normalization of AI assistance will increase automation bias: reviewers will trust AI output too readily, especially under time pressure.
  • Worries extend to the career pipeline: if juniors lean on LLMs instead of learning fundamentals, where will future domain experts come from?

Learning and Information Quality

  • Some say LLMs are “rocket fuel” for learning when paired with high‑quality sources, references, and critical verification.
  • Others counter that LLMs frequently fabricate plausible‑sounding details and citations, which is especially dangerous for novices who can’t spot errors.
  • There is broad anxiety that AI‑generated content will pollute search results and documentation, making reliable information harder to find and locking in outdated Stack Overflow patterns.

Testing, Multi‑Agent Review, and Comments

  • Several suggest AI should be used heavily to generate tests and to critique specs and code, possibly with multiple models cross‑checking each other.
  • Skeptics note the tests themselves can be wrong or gamed, so subtle bugs still slip through (first sketch after this list).
  • Redundant line‑by‑line comments in the Cloudflare repo are read as an LLM “tell” (second sketch below); some find them useless noise, others think they still beat typical under‑documented human code.
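
A minimal sketch of how a generated test can be “gamed”, using an invented helper:

```typescript
import assert from "node:assert";

// Hypothetical helper with a subtle bug: "".split(" ") yields [""],
// not the empty array a caller would expect for an empty scope string.
function normalizeScope(scope: string): string[] {
  return scope.split(" ");
}

// A generated test can simply enshrine current behavior as the
// expectation: green check, bug preserved.
assert.deepStrictEqual(normalizeScope(""), [""]);

// The spec-driven assertion a reviewer actually wants (this one fails
// today, which is exactly the point):
// assert.deepStrictEqual(normalizeScope(""), []);
```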
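
And the comment style flagged as a tell is the line‑by‑line restatement; a made‑up example of the pattern:

```typescript
// Set the counter to zero
let counter = 0;
// Increment the counter by one
counter += 1;
```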