We're losing our voice to LLMs
LLMs as Editing Tools vs. Ghostwriters
- Many commenters distinguish using LLMs as editors (spellcheck, tone check, clarity) from using them as ghostwriters that generate full text.
- Several posters share workflows in which they draft in their own words and then use LLMs for light corrections, stressing that they reject most stylistic suggestions to preserve “voice.”
- Others warn about a “feedback loop” where constant LLM-polishing gradually irons out quirks, idioms, and personality, especially for non‑native speakers or neurodivergent writers trying not to “hurt people’s brains.”
Accessibility, Expression, and the Value of Struggle
- Supporters argue LLMs lower barriers for people with good ideas but weak writing skills, limited command of the language, or trouble with tone; they see this as improved communication, not a replacement for thought.
- Critics say this hijacks “accessibility” language: writing skill comes from years of bad drafts, and LLMs short‑circuit that growth and deter people from ever really learning to write.
- Some frame it as a desire for the external validation of being seen as a good writer without the inner work; others reject the romanticization of “struggle” as gatekeeping.
Homogenization, “AI Slop,” and Authenticity
- Many describe a growing sameness in online text: LinkedIn posts, status updates, Medium articles, corporate emails, and even some HN submissions all feel like “blogging 101” or “social media manager” voice.
- People report instantly tuning out suspected AI text; suspicion alone makes them read everything more uncharitably, including genuine human writing.
- There’s concern that early, mediocre writing has always been necessary practice; if AI can already produce “decent generic” text, some writers may never push past that stage.
Algorithms, Engagement, and Regulation
- A large subthread argues that engagement-optimized feeds (ragebait, filter bubbles, sensationalism) are more corrosive than LLMs themselves.
- Some call for heavy regulation: bans or limits on personalized feeds, transparency on ranking factors, or mandated APIs so users can run their own filters (possibly LLM‑based); a rough sketch of the user‑run‑filter idea follows this list.
- Others warn that any regulator will be political and biased; attempts to outlaw “algorithms” risk sweeping up relatively benign systems like HN’s front page.
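To make the “mandated API, user‑run filter” idea concrete, here is a minimal sketch. Everything in it is hypothetical: no such API exists, and the `Post` schema, `BLOCKLIST`, and `my_feed` function are invented for illustration. The point is only that ranking could be a small, user‑owned function (followed accounts, no bait phrases, reverse‑chronological) rather than an opaque engagement model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical post record; a real mandated API would define its own schema.
@dataclass
class Post:
    author: str
    text: str
    published: datetime
    follows_author: bool  # whether the reader follows this account

# User-chosen exclusion phrases (stand-ins for "engagement bait" signals).
BLOCKLIST = {"sponsored", "you won't believe"}

def my_feed(posts: list[Post]) -> list[Post]:
    """A user-run ranking: followed accounts only, no bait phrases,
    strictly reverse-chronological, no engagement signals at all."""
    kept = [
        p for p in posts
        if p.follows_author
        and not any(phrase in p.text.lower() for phrase in BLOCKLIST)
    ]
    return sorted(kept, key=lambda p: p.published, reverse=True)

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    sample = [
        Post("alice", "New blog post on RSS readers", now, True),
        Post("brand", "Sponsored: you won't believe this deal", now, True),
        Post("carol", "Conference notes", now, False),
    ]
    for p in my_feed(sample):
        print(p.author, "-", p.text)
```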
Coping Strategies and Retreats
- Many describe deleting or drastically limiting Facebook, Twitter/X, and LinkedIn, or using them only as static résumés / DM tools.
- Alternatives mentioned: Mastodon, Bluesky/atproto with user‑defined feeds, RSS, niche forums, “small web” blogs, and simple chronological or exclusion‑based filters (a small RSS example follows this list).
- Some retreat to pre‑LLM books and older communities, seeing today’s internet as dominated by “AI slop” layered atop long‑standing “human slop.”
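As a sketch of the “chronological or exclusion‑based” approach, here is a tiny RSS reader built on the third‑party feedparser library. The feed URLs and exclusion phrases are placeholders, not anything the thread prescribes; it just merges feeds, drops titles matching a user blocklist, and sorts by recency only.

```python
import calendar
import feedparser  # third-party: pip install feedparser

# Placeholder feed URLs; substitute your own blogs or forum feeds.
FEEDS = [
    "https://example.com/blog/rss.xml",
    "https://example.org/notes/feed.atom",
]

# Exclusion-based filtering: drop anything whose title contains these phrases.
EXCLUDE = ("sponsored", "webinar", "10 reasons")

def merged_timeline(urls):
    items = []
    for url in urls:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            stamp = entry.get("published_parsed")  # UTC struct_time, if present
            if stamp is None:
                continue
            if any(phrase in title.lower() for phrase in EXCLUDE):
                continue
            items.append((calendar.timegm(stamp), title, entry.get("link", "")))
    # Strictly chronological, newest first; no ranking beyond recency.
    return sorted(items, reverse=True)

if __name__ == "__main__":
    for ts, title, link in merged_timeline(FEEDS):
        print(title, "->", link)
```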
Longer‑Term Cultural Concerns
- Several worry that after a generation grows up reading and conversing with LLMs, humans will begin to think, argue, and justify themselves in LLM‑like patterns—polished, generic, and bullet‑pointed.
- Others note that standardization of language and style predates LLMs (dictionaries, grammar books, SEO, corporate tone); LLMs may just be the latest, cheaper amplifier of that trend.
- A recurring counterpoint: the real defense is critical thinking—evaluating ideas on their merits regardless of whether a human or a model produced the words.