I'd rather read the prompt

Prompt vs Output and “House Style”

  • Many agree the “interesting” part is the prompt: the compressed record of human decision-making and tradeoffs. The LLM output is often seen as decompressed fluff that adds little new content.
  • Commenters note a recognizable LLM “house style”: verbose, over-structured, generic positivity, bullet points with bold headers, em dashes, etc. Even when style is customized, readers increasingly suspect AI.
  • Several say they’d often rather read the raw notes, bullets, or prompt than the polished AI prose, because the latter feels like “cotton candy”: smooth, but low-information and inauthentic.

Education, Learning, and Cheating

  • Strong concern that students using LLMs for essays, assignments, or code are cheating themselves out of learning: writing and problem‑solving are meant to sharpen thinking, not just produce artifacts.
  • Others counter that many assignments are already pointless regurgitation; LLM use simply reveals how bad the assessment design is. If most students reach for AI, perhaps the assignment, not the student, is broken.
  • Suggested responses include in‑person handwritten or oral exams, grading the process (recorded work, prompts, drafts) rather than only the final text, and explicitly requiring that prompts be submitted alongside outputs.
  • Many doubt AI use will remain detectable long term; obvious “GPT-speak” will disappear as students learn to edit it out. Relying on the final artifact alone as proof of competence is seen as increasingly untenable.

Professional and Coding Uses

  • Some practitioners describe significant productivity gains: drafting legal documents, security reports, documentation, literature abstracts, or creative outlines, then revising by hand. They argue this is no different from using calculators or IDEs.
  • Critics argue that outsourcing synthesis to LLMs impedes “internalizing” knowledge and developing judgment; it may produce more polished deliverables but shallower professionals.
  • In coding, there is broad agreement that “vibe coding” (accepting large blobs of AI code without understanding) is dangerous. Acceptable uses are narrow: boilerplate, scaffolding, refactors, or explanations—provided the human fully reviews and owns the result.

Quality, Originality, and the Slop Economy

  • Many see LLMs as accelerants of existing trends: corporate waffle, marketing slop, SEO spam, and box‑ticking performance artifacts. AI makes it cheaper to produce “impressive-looking” but low‑value text, further degrading signals like essays, cover letters, and documentation.
  • Others emphasize constructive uses: as always‑available tutors, Socratic partners, translators, or summarizers for dense material. They see them as tools that can deepen learning when used to clarify, not replace, thought.
  • Underlying the debate is a split worldview: one side prioritizes authenticity, craft, and thinking; the other prioritizes efficiency, deliverables, and “business value,” accepting that much human communication has long been mostly performative slop.