Ask HN: How to deal with long vibe-coded PRs?

General stance on huge PRs (AI or not)

  • A 9k-LOC PR spread across dozens of files is widely seen as unreviewable and as bad engineering practice.
  • Common recommendation: reject outright or close with an explanation; PR size alone is a valid reason.
  • Acceptable exceptions: purely mechanical refactors, codegen, or migrations with strong tests and clear scope.

What reviewers expect instead

  • Break the work into stacked, self‑contained PRs (often 150–400 LOC each, rarely >1k).
  • Start from a ticket/design/RFC so reviewers already understand intent and architecture.
  • Use feature flags or integration branches to land work incrementally without exposing incomplete features (see the flag sketch after this list).
  • Require authors to:
    • Review their own code first.
    • Explain requirements, design choices, and why complexity (e.g., a DSL) is needed.
    • Provide tests, coverage, and a test plan.
    • Be able to walk through the code live.
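
  The feature-flag point above is what lets incomplete work merge in small pieces. A minimal sketch in Python, assuming an environment-variable flag; the flag name and checkout functions are hypothetical, and real projects usually read flags from a flag service or config system rather than the environment:

    import os


    def new_checkout_enabled() -> bool:
        # Hypothetical flag: read FEATURE_NEW_CHECKOUT from the environment.
        return os.environ.get("FEATURE_NEW_CHECKOUT", "false").lower() == "true"


    def legacy_checkout_flow(cart):
        # Existing behavior, untouched by the feature work.
        return sum(item["price"] for item in cart)


    def new_checkout_flow(cart):
        # Filled in by later small PRs; never runs unless the flag is on.
        raise NotImplementedError("behind FEATURE_NEW_CHECKOUT")


    def checkout(cart):
        # Each incremental PR extends new_checkout_flow behind the flag,
        # so reviewers see small diffs against code that stays dark in production.
        if new_checkout_enabled():
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

  The legacy path keeps running and keeps its tests on every PR; the new path only executes where the flag is explicitly enabled.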

How to respond in practice

  • For coworkers:
    • Say “we don’t work this way, please split this” and, if needed, schedule a long walkthrough to surface the true cost.
    • Escalate to managers if pressured to accept unreviewable changes; some say they’d look for a new job if forced.
  • For open source:
    • Close with a short, canned explanation and a link to contribution guidelines; suggest starting with smaller issues.
    • Do not feel obligated to spend personal time on massive drive‑by PRs.

AI as cause vs. AI as tool

  • One view: origin (AI vs human) is irrelevant; only quality and size matter.
  • Opposing view: AI has created a new class of low‑effort “slop” and drive‑by contributors who don’t understand their own PRs.
  • Concerns: time asymmetry (hours to generate vs days to review), security/malware risk, duplicated or over‑engineered code, long‑term maintainability.
  • Some orgs ban or strictly flag LLM‑generated code; others accept it but treat it as junior‑level work.
  • “Fight slop with slop”: use LLMs to summarize, pre‑review, split commits, and surface obvious issues (a pre‑review sketch follows this list), but humans still own the final decision.
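
  A minimal pre-review sketch of that idea, assuming the openai Python package (v1.x), an OPENAI_API_KEY in the environment, and origin/main as the base branch; the model name and prompt are placeholders, and the output is an aid for the human reviewer, not a verdict:

    import subprocess

    from openai import OpenAI


    def pr_diff(base="origin/main"):
        # Three-dot diff: everything this branch changed since diverging from base.
        out = subprocess.run(
            ["git", "diff", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout


    def pre_review(diff):
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Summarize this diff for a reviewer: intent, risky "
                            "areas, duplicated or over-engineered code, missing tests."},
                {"role": "user", "content": diff[:100_000]},  # crude context cap
            ],
        )
        return resp.choices[0].message.content


    if __name__ == "__main__":
        print(pre_review(pr_diff()))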

Cultural and process takeaways

  • PRs are collaboration and shared responsibility, not “someone must check my work.”
  • Large AI PRs without author understanding are seen as disrespectful of reviewer time.
  • Clear, documented policies on PR size and AI usage make these rejections easier and less personal (see the CI sketch below for one way to enforce a size budget automatically).
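
  One way to keep a size policy impersonal is to have CI deliver the rejection. A sketch, assuming a 400-changed-line budget and origin/main as the base branch; both values are illustrative, and a real policy would also allow-list mechanical changes such as codegen and lockfiles:

    import subprocess
    import sys

    MAX_CHANGED_LINES = 400  # illustrative budget, per the ~150-400 LOC guidance above


    def changed_lines(base="origin/main"):
        # --numstat prints "added<TAB>deleted<TAB>path" per file; "-" for binaries.
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, deleted, _path = line.split("\t", 2)
            if added == "-":
                continue  # skip binary files
            total += int(added) + int(deleted)
        return total


    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            print(f"PR changes {n} lines; the documented budget is {MAX_CHANGED_LINES}. "
                  "Please split this into smaller, self-contained PRs.")
            sys.exit(1)
        print(f"PR size OK: {n} changed lines.")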