Don't post generated/AI-edited comments. HN is for conversation between humans

Purpose of the new guideline

  • The new guideline formally bans generated or AI‑edited comments; the intent is to keep HN a human‑to‑human conversation space.
  • Many see this as clarifying an existing norm that bots, bulk paste, and “agent” comments were already unwelcome.
  • The goal is less about technical purity and more about preserving the “vibe”: messy, opinionated humans, not engagement‑optimized slop.

Enforcement, feasibility, and culture

  • Wide agreement that airtight enforcement is impossible; AI content already passes as human.
  • Several argue rules mainly shape culture and give moderators a basis to act on obvious abuse and low‑effort flooding.
  • Some worry the rule is “performative” and will mostly punish people honest enough to disclose AI use.

AI assistance vs AI authorship

  • Strong split:
    • One side: any AI rewriting (beyond basic spellcheck) hides voice, flattens style, and shortcuts thinking; if a model chose the words, it’s not your comment.
    • Other side: using AI for grammar, structure, or summarizing sources can help non‑experts, ESL speakers, and “neurospicy” people communicate more clearly; the ideas are still human.
  • Many propose a pragmatic line: using AI for research, translation, fact‑checking, or private feedback on a draft is fine; copy‑pasting AI prose is not.

Accessibility, ESL, and fairness

  • Several dyslexic, disabled, and non‑native speakers say LLM editing unlocked participation they otherwise wouldn’t have.
  • Others counter that imperfect language is acceptable, and over‑reliance on AI both harms learning and invites unfair judgment.
  • Accessibility carve‑outs are widely suggested, but the exact boundary remains unclear.

Detection and “witch‑hunts”

  • Complaints that accusations of “AI slop” are often wrong and more disruptive than the suspected comment.
  • Commenters highlight the existing guideline against posting insinuations; users are urged to email the moderators rather than call people bots in‑thread.

Identity, proof‑of‑humanity, and the future of forums

  • Long subthread on proof‑of‑human systems: biometric orbs, ID‑based tokens, SIM‑like schemes, invite trees, reputation graphs, and cert authorities.
  • Many see these as dystopian, easily subverted, or fatal to anonymity.
  • Some predict large open forums will “slop up,” pushing serious discussion into smaller, invite‑only or reputation‑based communities, with a real loss of public, searchable knowledge.

Alternative ideas and concerns

  • Suggestions include: rate limits and link limits for new accounts, bot‑flag buttons, soft algorithmic downweighting of AI‑like text, or mandatory AI‑labeling rather than banning.
  • A minority argue the source shouldn’t matter: if a comment is interesting and correct, human vs. AI authorship is irrelevant.
  • Others note the irony of an AI‑funding ecosystem wanting to wall itself off from the very slop its tools enable.