A Remarkable Assertion from A16Z

AI authorship of the A16Z reading list blurb

  • Many commenters see the “literally stop mid-sentence” claim as classic LLM hallucination: confidently specific, trivially false, and stylistically “AI-slop.”
  • Others propose human error (misremembering the endings, or conflating his books with novels that genuinely end mid-sentence), though most in the thread find these explanations less plausible.
  • GitHub history is cited: descriptions were generated in Cursor/Opus (“opus descriptions in cursor, raw”), with explicit “AI GENERATED NEED TO EDIT” notes, then lightly human-edited.

How the Stephenson description evolved

  • An earlier AI draft compared his endings to a “segfault,” a hyperbole that at least gestured at the right kind of abruptness.
  • A later commit changed it to “literally stop mid-sentence” and introduced a misspelling of his name; this suggests human post-editing of AI text, not pure machine output.
  • Commenters debate whether the AI only wrote the blurbs or also helped choose the books; the in-thread consensus is that the list was likely human-chosen but machine-described, which undercuts its claimed authority.

Debate over “literally”

  • One camp: “literally” is now widely used as an intensifier, not meant literally; that’s likely what the editor intended.
  • Counterpoint: even as an intensifier it’s misleading here, because the statement is a concrete, checkable claim about the text, and that claim is simply false.
  • Linguistic side-notes: “literally” has been used as an intensifier since the 18th century; some worry that losing a precise word for non-metaphorical truth is a kind of drift toward “Newspeak.”

Are his endings actually bad?

  • Several readers say his endings are perfectly normal, no more abrupt than Shakespeare’s or Frank Herbert’s; the mid-sentence claim is pure fabrication.
  • Others report a consistent pattern: a gripping first 80% followed by endings that feel rushed, bloated, or mistimed, especially in his later novels.
  • Comparisons are made to genuinely mid-sentence endings (e.g., certain postmodern and unfinished works) to emphasize how different that is from his books.

“Inhuman Centipede” and broader LLM criticism

  • The article’s “Inhuman Centipede” metaphor for models training on their own slop resonates; commenters trace similar prior uses and link it to a feared self-reinforcing garbage loop.
  • The incident is treated as emblematic of the venture firm and of broader Silicon Valley culture: shallow literary engagement, AI-generated PR, and “nerd shibboleths” deployed to signal taste rather than genuine reading.
  • Multiple personal anecdotes highlight how often LLMs fail on practical tasks, reinforcing skepticism about using them for authoritative recommendations.