It's insulting to read AI-generated blog posts

Detecting “AI Voice” and Reader Reactions

  • Many commenters say they instantly hit “back” when prose “feels like AI”: over-explained, repetitive, bloated, generic in tone, and padded with emojis and bulletized clichés.
  • Others apply a softer filter: “is this interesting or useful?” If not, they leave, whether the culprit is AI or just bad writing.
  • Some are disturbed that colleagues can’t tell AI-generated text apart from human writing, or don’t think the distinction matters.
  • Several note that AI often has good local coherence but loses the thread across longer texts; they claim they could reliably spot an “AI book.”

Authenticity, Intention, and Human Connection

  • A large faction values knowing a human actually struggled to express something: intention, effort, and personal voice are seen as core to writing.
  • Analogies to music and painting recur: a flawed live performance or amateur painting is preferred over a technically perfect but soulless machine rendition.
  • Others counter: if the text is clear and useful, they don’t care how it was produced; they judge message, not messenger.

“Slop”, SEO, and Trust Erosion

  • Widespread frustration with AI slop: unreadable AI-generated blogs, READMEs, PRDs, LinkedIn posts, and news articles filled with generic filler, cutesy emojis, and no real insight.
  • Many assume such pieces exist primarily for SEO, ad impressions, or personal-brand farming, not to help readers.
  • Some respond by blacklisting domains, unsubscribing, or filtering in search tools; trust in random web content is dropping.

AI as Tool vs. Abdication of Responsibility

  • Narrow use (spellcheck, grammar touchups, translation, outline critique) is seen by many as legitimate—especially for non-native speakers or neurodivergent people.
  • Others argue that even light assistance can smuggle in “AI voice” and erase authenticity; they prefer janky but clearly human language.
  • Strong pushback against AI-written pull requests, documentation, and project proposals: reviewers feel it’s insulting to spend serious effort on code or text that the author never understood or took ownership of.
  • A common norm emerges: AI assistance is acceptable if the human deeply reviews, edits, and takes responsibility; dumping unedited AI output on others is not.

Learning, Growth, and “Being Human”

  • Some endorse the essay’s “make mistakes, feel embarrassed, learn” ethos: over-automation of communication is seen as stunting growth and flattening individuality.
  • Others reject this as romanticized Luddism, likening AI to calculators, spellcheck, or typewriters: tools that free up time and cognitive bandwidth for higher-level thinking.