Show HN: Respectify – A comment moderator that teaches people to argue better
Overall concept and intended use
- The tool analyzes comments before posting, flags issues (toxicity, dogwhistles, spam, off-topic content, logical fallacies), and suggests rewrites.
- Developers frame it as a “nudge and educate” system, especially for bloggers who want comments but fear toxicity.
- They emphasize configurability: site owners can tune per-category thresholds (e.g., dogwhistles, sexual content) rather than enforcing one global standard (a minimal sketch follows).
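
For concreteness, here is a minimal sketch of what such a configurable pre-post check might look like, assuming a classifier that returns per-category scores in [0, 1]; the names (`CATEGORIES`, `ModerationConfig`, `review`) are hypothetical and not Respectify's actual API:

```python
from dataclasses import dataclass, field

# Assumed: a classifier (likely an LLM call) returns per-category
# scores in [0, 1]; that call is out of scope here.
CATEGORIES = ["toxicity", "dogwhistle", "spam", "off_topic", "fallacy"]

@dataclass
class ModerationConfig:
    # Site owners tune per-category thresholds instead of accepting one
    # global standard, e.g. relax "dogwhistle" while keeping "spam" strict.
    thresholds: dict = field(default_factory=lambda: {c: 0.8 for c in CATEGORIES})

def review(comment: str, scores: dict, config: ModerationConfig) -> dict:
    """Flag any category whose score meets the site's threshold, and
    nudge toward a rewrite rather than silently rejecting."""
    flags = {c: s for c, s in scores.items() if s >= config.thresholds.get(c, 1.0)}
    return {"comment": comment,
            "flags": flags,
            "action": "suggest_rewrite" if flags else "approve"}

# A site that finds dogwhistle detection too twitchy can "dial it down":
cfg = ModerationConfig()
cfg.thresholds["dogwhistle"] = 0.95
print(review("See you at the Christmas party!", {"dogwhistle": 0.90}, cfg))
# -> approved, since 0.90 < 0.95 under the relaxed threshold
```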
False positives and dogwhistle overreach
- Many users report absurd flags: “Christmas party” flagged as a Christian dogwhistle, “Of course it is!” as off-topic, and the phrase “horrible people” treated as inherently wrong.
- Dogwhistle detection is widely seen as oversensitive and context-blind; it routinely mislabels benign statements, especially around religion and race.
- Developers repeatedly acknowledge this and say they “dialed it way down” during the thread.
Perceived political bias and echo-chamber risk
- Multiple tests on UBI, Trump, Obama, and transgender-rights topics suggest stricter treatment of certain viewpoints.
- Example: “Obama sucks” is flagged as racist dogwhistling; “Trump sucks” is not. Some pro-Trump or anti-UBI comments are hard to get approved even when civil (a paired-prompt test is sketched after this list).
- Critics argue this bakes in ideological bias, launders particular politics as “respect,” and risks turning communities into echo chambers.
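
One way to make the bias complaint testable rather than anecdotal: a paired-prompt probe that holds phrasing fixed and varies only the named subject, then compares the resulting flags. A minimal sketch, where `score_comment` stands in for a hypothetical call to the moderation endpoint:

```python
SUBJECTS = ["Trump", "Obama"]
TEMPLATES = ["{s} sucks", "I think {s} was a bad president."]

def probe_bias(score_comment):
    """Run identical templates over different subjects. Because only the
    subject varies, any divergence in flags is attributable to the
    subject itself rather than to tone or phrasing."""
    report = {}
    for template in TEMPLATES:
        for subject in SUBJECTS:
            text = template.format(s=subject)
            report[text] = score_comment(text)  # hypothetical moderation call
    return report
```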
Quality of rewrites and “LLM-speak”
- Suggested revisions are often described as mushy, over-equivocating, or meaningless, and sometimes alter the original meaning.
- Users worry about timelines being filled with samey, sanitized corporate/LLM tone, encouraging self-censorship and “algo-speak.”
Limits against bad-faith actors
- Several commenters argue the premise is flawed: real bad-faith actors are often eloquent, strategic, and will simply adapt or use the tool to better mask harassment or propaganda.
- Some see it as enabling sealioning or “laundering” bigotry into polite form.
Alternative ideas and use cases
- Suggestions:
  - Use it as a pre-post self-check or browser plugin rather than a gatekeeper.
  - Focus more on logic, evidence, structure, and fallacies than on sentiment.
  - Rank or hide low-quality/angry content instead of blocking it (see the sketch after this list).
  - Create “discussion arenas” for vetted good-faith participants.
- Personal blocklists and user-side filters are proposed as a more agency-preserving alternative.
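
As a sketch of the rank-or-hide and user-side-filter ideas above (all names hypothetical, with the quality score assumed to come from whatever classifier the reader trusts): low scorers and blocklisted authors are collapsed behind a toggle client-side rather than blocked server-side, preserving the reader's agency:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    quality: float  # assumed 0-1 score from a classifier of the reader's choosing

def rank_and_collapse(comments, blocklist=frozenset(), min_quality=0.3):
    """Sort by quality; collapse (never delete) low scorers and anyone on
    the reader's personal blocklist. Collapsed comments stay expandable."""
    visible, collapsed = [], []
    for c in sorted(comments, key=lambda c: c.quality, reverse=True):
        if c.author in blocklist or c.quality < min_quality:
            collapsed.append(c)  # rendered behind a "show low-quality" toggle
        else:
            visible.append(c)
    return visible, collapsed
```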
Philosophical and practical objections
- Many see it as paternalistic, dystopian, or a step toward algorithmic speech control / “social credit.”
- Concerns about normalization of AI moderation, chilling honest speech, and creeping censorship.
- Operational issues noted: slow site, timeouts, unstable outputs, gaps in the privacy policy, and potential for abuse (e.g., using the tool’s feedback to fine-tune spam that passes moderation).