The Future of Comments Is Lies, I Guess
LLM Spam, Moderation, and HN Mechanics
- LLMs are seen as a major new spam vector; existing defenses like karma, rate limits, and downvotes help but are imperfect and can also bury controversial but correct content.
- Some note that popularity-based ranking may actually favor LLM output, which is optimized for engagement.
- There’s sympathy for moderators: most large platforms already host plenty of low-quality content, and LLMs will likely make the job harder by layering polished, persuasive spam and scams on top of it.
Dystopia, Fraud, and Trust Breakdown
- Several commenters express “dystopia vibes”: LLMs make phishing profitable against targets that were previously not worth the effort, and enable sophisticated fraud (e.g., deepfaked video calls authorizing large transfers).
- Worries extend to all digital communication becoming untrustworthy, feeding arguments for mandatory digital identity and, in turn, more control and censorship.
- Others see a long-standing trajectory: more information, more garbage; LLMs just accelerate it.
Anonymity, Identity, and Web of Trust
- A central debate: should the internet “ditch anonymity” once human and LLM output are indistinguishable?
- Pro-identity arguments: use PKI or a web of trust, plus reputation, to prove posters are “real humans,” and reduce spam, bullying, and misinformation through permanent bans (a minimal sketch of the chain-of-trust idea follows this list).
- Counterarguments:
  - De-anonymization enables political repression and chilling effects, and it doesn’t actually stop harassment or misinformation; it only shifts tactics.
  - Verification is expensive, spoofable (with deepfakes), and risks centralizing sensitive ID data.
- Some advocate pseudonymity backed by third‑party identity providers and chains of trust; others insist on preserving anonymous spaces.
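
A minimal sketch of the pseudonymity-plus-chain-of-trust idea, assuming Ed25519 keys via Python’s `cryptography` package; the `root_trusted`, `vouches`, and `accept_comment` names and the depth-limited lookup are illustrative assumptions, not anything specified in the thread:

```python
# Sketch only: each pseudonym signs its comments with a stable keypair, and the
# platform accepts a comment if the signature verifies AND the key is reachable
# from a trusted root key through a short chain of vouches.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def raw(key: Ed25519PublicKey) -> bytes:
    """Raw 32-byte encoding of a public key, used as a lookup key below."""
    return key.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

root_trusted: set[bytes] = set()       # keys verified out of band (hypothetical ID provider)
vouches: dict[bytes, set[bytes]] = {}  # voucher key -> keys it vouches for

def is_trusted(key_bytes: bytes, depth: int = 3) -> bool:
    """Trusted if it is a root key, or vouched for (within a few hops) by one."""
    if key_bytes in root_trusted:
        return True
    if depth == 0:
        return False
    return any(
        key_bytes in vouched and is_trusted(voucher, depth - 1)
        for voucher, vouched in vouches.items()
    )

def accept_comment(author: Ed25519PublicKey, comment: bytes, signature: bytes) -> bool:
    """Reject forged signatures and keys outside the web of trust."""
    try:
        author.verify(signature, comment)
    except InvalidSignature:
        return False
    return is_trusted(raw(author))

# Usage: a verified root pseudonym vouches for a newcomer, who then posts.
root = Ed25519PrivateKey.generate()
newcomer = Ed25519PrivateKey.generate()
root_trusted.add(raw(root.public_key()))
vouches[raw(root.public_key())] = {raw(newcomer.public_key())}

post = b"I am (probably) a real human."
assert accept_comment(newcomer.public_key(), post, newcomer.sign(post))
```

Note that the step that makes this work, populating `root_trusted` via some identity provider, is exactly the centralization of sensitive ID data the counterarguments warn about.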
Economic Levers: Raising the Cost of Spam
- One thread focuses on economic solutions: raising the cost of abuse has worked before on the web and in email (e.g., HTTPS, phone verification/2FA).
- Proposed measures include small per‑comment fees or ID/payment requirements; critics note that content farms and spammers will simply pay as long as spam stays profitable, while genuine users bear the friction and the risk of unjust bans (see the back-of-envelope arithmetic below).
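
As a rough illustration of that critique (all numbers are invented for the example, not taken from the thread):

```python
# If spam remains profitable after the fee, the fee is friction rather than a stop.
fee_per_comment = 0.01        # hypothetical platform charge, USD
revenue_per_comment = 0.05    # hypothetical spammer earnings per posted comment, USD
comments_per_day = 100_000

daily_profit = (revenue_per_comment - fee_per_comment) * comments_per_day
print(f"Spammer still nets ${daily_profit:,.0f}/day")  # $4,000/day in this example
```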
LLMs as Moderation Tools
- Some argue LLMs should also assist moderation: flagging spammy commercial content, harassment, and off‑topic posts, or categorizing comments (argument vs. information vs. anecdote).
- Skeptics point out LLMs don’t “know” truth, can’t reliably judge nuanced fallacies, and may encode bias; yet even coarse tools could drastically improve low-end comment sections (a rough classifier sketch follows this list).
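
A hedged sketch of the coarse-classifier idea, assuming the OpenAI Python SDK; the model name, label set, and prompt are illustrative choices, and (per the skeptics) the output feeds a human review queue rather than automatic removal:

```python
# Sketch: ask a model for one coarse label per comment and queue anything
# uncertain or flagged for a human moderator instead of acting automatically.
from openai import OpenAI

LABELS = ["spam", "harassment", "off-topic", "argument", "information", "anecdote"]
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_comment(comment: str) -> str:
    """Return one coarse label, or 'needs-human-review' if the reply is unusable."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any instruction-following model would do
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "Classify the user's comment with exactly one word from "
                           f"this list: {', '.join(LABELS)}. Reply with only that word.",
            },
            {"role": "user", "content": comment},
        ],
    )
    label = (response.choices[0].message.content or "").strip().lower()
    return label if label in LABELS else "needs-human-review"

# Usage: route flagged or uncertain comments to moderators rather than deleting.
label = classify_comment("Buy cheap followers at example.com!!!")
if label in {"spam", "harassment", "needs-human-review"}:
    print("queued for human review")
```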
Fate of Comments and Communities
- Several foresee mainstream comment sections shutting down or becoming unreadable, with meaningful discussion retreating to smaller, registered, heavily moderated or ID-verified communities.
- Others are less alarmed, arguing online discourse was already heavily constrained and propagandistic; LLMs merely force people to question authority and information sources more critically.