Ask HN: Please restrict new accounts from posting
Concerns about new accounts and spam
- Many participants report a sharp rise in low‑effort posts and comments from “green” (new) accounts, often perceived as LLM‑generated “slop” or promotion.
- Some see patterns of dormant or aged low‑karma accounts suddenly activating with similar low‑quality content.
- Others note that HN has historically handled spam well, but the scale and style of AI‑assisted activity have made moderation harder.
AI‑generated content and detection
- Strong sentiment that obviously LLM‑generated comments and Show HNs should be bannable; moderators confirm generated comments are generally grounds for bans.
- Disagreement on detectability: some say blatant LLM style is easy to spot; others warn about false positives and note humans now imitate LLM style.
- Debate over whether to allow LLM‑assisted writing (especially for non‑native speakers) versus fully generated “zero‑effort” content.
- Several argue constant “this is AI” comments are themselves low‑value noise unless there’s clear evidence or actionable labeling.
Show HN quality and “AI slop”
- Many feel Show HN quality has dropped: more vibe‑coded repos, Potemkin projects, and LLM‑generated READMEs and landing pages.
- Others argue apparent quality is higher but effort per project has fallen due to AI, raising expectations about what counts as “impressive.”
- There is worry that genuine, high‑effort projects (especially by new users) may be dismissed as AI‑generated.
Policy changes and proposals
- Moderators have already begun restricting Show HN submissions from new accounts; the intent is to require some prior participation before showcasing a project.
- Suggested mechanisms:
  - Age/karma gates for submissions or downvotes.
  - Lower flag thresholds for killing posts from new accounts.
  - Vouching / invite or web‑of‑trust systems.
  - Proof‑of‑work, captchas (including “reverse” ones), or small monetary costs.
  - User‑side filters: hiding green/low‑karma accounts, muting specific users, browser extensions, “LLM spam” flags.
- Critics note determined spammers can age and farm accounts; added friction may mostly hurt legitimate newcomers and privacy‑conscious users.
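The age/karma gate proposed above can be sketched as a simple predicate. This is a minimal illustration, not HN's actual logic: the threshold values, the `Account` model, and the function name are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds -- HN's real values, if any, are not public.
MIN_ACCOUNT_AGE = timedelta(days=14)
MIN_KARMA_FOR_SHOW_HN = 10


@dataclass
class Account:
    created: datetime  # when the account was registered
    karma: int         # accumulated points from prior participation


def may_submit_show_hn(account: Account, now: datetime) -> bool:
    """Gate Show HN submissions on some prior participation:
    the account must be old enough AND have earned minimum karma."""
    old_enough = now - account.created >= MIN_ACCOUNT_AGE
    has_karma = account.karma >= MIN_KARMA_FOR_SHOW_HN
    return old_enough and has_karma
```

Requiring both conditions is what makes aging farmed accounts costly: a dormant account clears the age bar but still needs organic karma, which is harder to fake at scale.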
Openness vs. community health
- One camp prioritizes human‑only, high‑effort conversation and is willing to add friction and accept some false positives.
- Another fears echo chambers, loss of “author shows up” moments, and death by over‑restriction, drawing parallels to Reddit’s heavy automoderation.
- Some conclude that perfect filtering is impossible and users will increasingly need personal tools and reputation/trust systems to navigate an AI‑polluted internet.
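The personal filtering tools mentioned above could work along these lines: a client‑side filter that hides comments whose authors fall below a user‑chosen trust bar. The data model, field names, and default thresholds here are hypothetical; a real extension would fetch author metadata from the site's public API.

```python
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    author_karma: int     # hypothetical field, as reported by a site API
    author_age_days: int  # hypothetical field, days since account creation
    text: str


def visible(comments, min_karma=25, min_age_days=30):
    """Keep only comments whose authors clear a personal trust bar.
    Thresholds are per-user preferences, not site policy."""
    return [
        c for c in comments
        if c.author_karma >= min_karma and c.author_age_days >= min_age_days
    ]
```

Because the thresholds live on the reader's side, each user tunes their own tolerance for green accounts rather than relying on a single sitewide rule.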