I know you didn't write this

Reliability of AI “tells” (document history, style, formatting)

  • Many argue the author overconfidently inferred “definitely AI” from a single bulk paste with no edit history; lots of people draft in local editors (vim, Emacs, Obsidian, Notes, Org-mode) or in plain Markdown and then paste into Docs.
  • Tables, headings, and styling can also come over via paste, so a “wham, full document” history isn’t dispositive.
  • Others note that a sudden 5k-word, perfectly formatted doc from a normally terse colleague is itself suspicious, but still not proof.

Verification burden and effort asymmetry

  • Core complaint: AI lets people cheaply generate long, plausible plans whose correctness is expensive for others to verify.
  • This shifts work from the “prompter” to reviewers/implementers; any time saved by prompting is consumed by verification overhead.
  • AI enables people to be “wrong faster,” potentially flooding teams with slop and forcing repeated review rounds when authors apply only superficial fixes.

Trust, social contract, and feelings of betrayal

  • Several commenters say the hurt is about broken expectations: you thought a colleague did the thinking, but they actually outsourced it.
  • Before AI, a well-written, polished doc functioned as “proof-of-work” that the author had thought things through; that heuristic no longer holds.
  • Some compare undisclosed AI use to re-serving someone else’s leftovers at a restaurant: even if it tastes fine, it feels deceptive.

Judging output on its merits vs its origin

  • One camp: tools don’t matter; work should be judged on clarity, correctness, and utility. A bad document is bad regardless of whether a human or AI wrote it.
  • Opposing view: who generated the ideas matters, because you can’t infer how much real thought went in, and you may need the author’s own understanding later.

Context-dependent acceptability

  • Many see AI as fine or beneficial for low-stakes, bureaucratic, or obviously perfunctory work (grant boilerplate, unread 30-page reports, translation/grammar help).
  • Others insist on human-authored content for sensitive or high-trust domains: performance reviews, technical design reasoning, security reviews, nuanced feedback.

Etiquette and disclosure

  • Several want norms: mark AI-assisted text, include prompts, or at least explicitly say “generated by AI, reviewed and edited by me.”
  • Others find disclaimers awkward and prefer simply holding people fully responsible: if you send it, you own and defend it.