Lack of intent is what makes reading LLM-generated text exhausting

Experience of Reading LLM-Generated Text

  • Many commenters echo the author’s frustration: LLM-written documents feel bloated, meandering, and hard to follow, turning readers into “proofreaders” and “editors” against their will.
  • LLM prose is compared to bad student essays and mid-tier corporate boilerplate: grammatically correct, “flowing,” but vacuous or confusing.
  • Some liken it to texts that put you to sleep: the words are recognizable, but there is little signal, surprise, or structure to hold attention.

Human Intent and the Social Contract of Writing

  • A central theme is “intent”: readers expect a human mind to have cared about what’s being communicated.
  • Several argue that when a human can’t be bothered to write, it is offensive to ask another human to read the AI output; it feels like a breach of trust and of an implicit social contract between writer and reader.
  • Others counter that intent is in the eye of the reader: if readers perceive intent, that may be functionally sufficient, even when the source is a machine.

Automation, Work, and Human Worth

  • The line “no human is so worthless as to be replaceable with a machine” triggers debate.
  • One side sees offloading manual tasks as beneficial but regards replacing thinking, voice, and relationships as harmful to the human experience.
  • Critics argue this is inconsistent: society already accepts machines replacing physical labor; why draw a moral line at intellectual or creative work?

Where LLMs Are Seen as Legitimately Useful

  • Widely praised uses:
    • Editing for clarity, tone, brevity, and grammar while keeping human-authored core content.
    • Translation and exploration of foreign languages.
    • Research assistance and citation discovery (with verification).
    • Generating boilerplate and documentation that no one will read closely.
  • Several emphasize a “cyborg” model: tools that extend human judgment, not replace it.

Quality, Hallucinations, and Slop

  • Commenters note fabricated or misattributed citations creeping into papers and documentation.
  • A recurring idea: if your prompt has little real content and the output is long, the extra text is almost pure “AI slop” being pushed onto others.
  • Some predict norms will evolve: LLMs should be used to shorten and distill text, not to pad and obscure it.