It's rude to show AI output to people

Why AI output can feel rude

  • Many see pasting LLM text as the “LMGTFY” of the AI era: offloading thinking onto a machine and dumping the cognitive/verification cost on the recipient.
  • Readers want human connection, digressions, and evidence of real thought; AI prose feels generic, overlong, and emotionally flat.
  • There’s an asymmetry of effort: it’s now cheap to generate text but still expensive to read, verify, and respond. That’s perceived as disrespect for the reader’s time and attention.
  • Copy‑pasting AI in debates signals “argue with this machine, not me,” which some call dishonest and dehumanizing.

Impact on workplaces and collaboration

  • Common complaints: AI-written emails, chat messages, PRs, and specs that are verbose, incorrect, or unreviewed. Colleagues then must debug or fact‑check “slop.”
  • Reviewers resent AI‑generated code presented without testing or understanding; some close such PRs outright, or treat the author as less trustworthy.
  • People note AI can inflate a short status update into paragraphs that colleagues then re‑summarize with their own AI, a pure expand‑and‑compress loop that adds nothing.
  • Some report bosses or coworkers pasting LLM answers as gospel, or using AI to auto‑close support tickets, which feels especially insulting.

Trust, authorship, and identity

  • Several worry they can no longer tell whether words are genuinely someone’s own; the value of text as “proof of thought” is eroding.
  • Others note false positives: distinctive human styles get mislabeled as AI, leading some to start using LLMs just to “sound more human.”
  • There’s anxiety about a future where “my AI talks to your AI,” with humans largely out of the loop.

Use cases defended

  • Some argue AI is just a tool: akin to using a copywriter, translator, or Wikipedia summary. What matters is correctness and usefulness, not origin.
  • Non‑native speakers and people with disabilities say LLMs are a major enabler, helping them write clear, professional messages.
  • A minority believes resistance is nostalgia akin to early complaints about email or photography; culture will adapt.

Etiquette proposals and coping strategies

  • Suggestions include: disclose when AI was used; only share outputs you’ve vetted and understand; send the prompt instead; or ask colleagues explicitly to write in their own words.
  • Others advocate shaming obvious slop (e.g., “Reply All” jokes), filtering or blocking chronic offenders, or responding in kind with minimal AI‑generated replies.