Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
Perceived Problems with “I asked $AI, and it said…” Replies
- Many see these as the new “lmgtfy” or “I googled this…”: lazy, low-effort replies that add no value others couldn’t get themselves in one click.
- AI answers are often wrong yet highly convincing; reposting them without checking just injects fluent nonsense.
- Readers come to HN for human insight, experience, and weird edge cases, not averaged training‑data output.
- Some view such posts as karma farming or “outsourcing thinking,” breaking an implicit norm that writers invest more effort than readers.
Arguments for Banning or Explicitly Discouraging
- A guideline would clarify that copy‑pasted AI output is unwelcome “slop,” aligning with existing expectations against canned or bot comments.
- Banning the pattern would push people to take ownership: if you post it as your own, you’re accountable for its correctness.
- Some argue for strong measures (flags, shadowbans, even permabans) to prevent AI content from overwhelming human discussion.
- Several note that moderators have already said generated comments are not allowed, even if this isn’t yet in the formal guidelines.
Arguments Against a Ban / In Favor of Tolerance with Norms
- Banning disclosure doesn’t stop AI usage; it just incentivizes hiding it and laundering AI text as human.
- Transparency (“I used an LLM for this part”) is seen as better than deception, and a useful signal for readers to discount or ignore.
- Voting and flagging are viewed by some as sufficient; guidelines should cover behavior (low effort, off‑topic), not specific tools.
- In threads about AI itself, or when comparing models, quoting outputs can be directly on‑topic and informative.
Narrowly Accepted or Edge Use Cases
- Summarizing long, technical papers or dense documents can be genuinely helpful, especially on mobile or when the topic is outside one’s domain, though people worry about over-broad or inaccurate summaries.
- Machine translation for non‑native speakers is widely seen as legitimate, especially if disclosed (“translated by LLM”).
- Using AI as a research aide or editor is often considered fine if the final comment is clearly the poster’s own synthesis and judgment.
Related Concerns: Detection and Meta‑Behavior
- “This feels like AI” comments divide opinion: some find them useless noise, others appreciate them as early warnings about AI-generated articles or posts.
- There’s skepticism about people’s actual ability to reliably detect AI style; accusations can be wrong and corrosive.
- Several propose tooling instead of rules: AI flags, labels, or filters so readers can hide suspected LLM content if they wish.
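
As a rough illustration of the reader-side filter idea, here is a minimal sketch in TypeScript of a userscript-style filter. The trigger phrases, the decision to dim rather than remove comments, and the assumption that HN comment rows use the `tr.comtr` / `.commtext` markup are all illustrative choices, not anything specified in the thread.

```typescript
// Hypothetical reader-side filter: dims HN comments that look like pasted LLM output.
// The phrase list and the dimming behavior are assumptions for illustration only.

const TRIGGER_PHRASES: string[] = [
  "i asked chatgpt",
  "i asked claude",
  "i asked gemini",
  "chatgpt said",
  "the llm said",
];

function looksLikePastedAI(text: string): boolean {
  const lower = text.toLowerCase();
  return TRIGGER_PHRASES.some((phrase) => lower.includes(phrase));
}

function dimSuspectedAIComments(): void {
  // Assumed markup: "tr.comtr" is a comment row and ".commtext" holds the comment body.
  for (const row of document.querySelectorAll<HTMLTableRowElement>("tr.comtr")) {
    const body = row.querySelector<HTMLElement>(".commtext");
    if (body && looksLikePastedAI(body.innerText)) {
      row.style.opacity = "0.3"; // dim rather than delete, so readers can still look if curious
      row.title = "Dimmed by AI-reply filter (suspected pasted LLM output)";
    }
  }
}

dimSuspectedAIComments();
```

Run as a userscript (or pasted into the console on a comment page), this keeps the choice with the reader, which is the point several commenters made in favor of tooling over rules.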