What is happening to writing? Cognitive debt, Claude Code, the space around AI
Mass taste, “slop,” and the shift from writing to “content”
- Several commenters argue that human-written “pure writing” is effectively over for mass audiences; what will matter is substance, not prose style.
- There is heavy pessimism about “the masses,” portrayed as preferring familiar, low-novelty “comfort food” ideas and being indifferent to whether something is AI- or human-generated.
- Others push back on vague use of “the masses” and frame the problem instead as an attention war driven by short‑form, addictive content.
What “content” means and how AI amplifies existing trends
- Some lament how “content” has shifted from meaning “substance” to meaning “format,” seeing modern “content” as largely empty, multi‑media packaging.
- Multiple commenters note that many problems blamed on AI (post‑truth, low‑effort writing, SEO sludge) predate it; AI simply accelerates and scales them.
- Axios-style compressed news and social media skimmability are seen as natural precursors to AI summarization everywhere.
Quality tiers: bad writing, great writing, and AI’s place
- Widespread agreement that most human writing was already bad; AI just made bad prose free and ubiquitous.
- Some think today’s models can handle low‑quality social‑media posts and filler journalism, but not high‑end general‑audience work or canon‑level literature.
- Others argue that “competition” is now about volume and distribution: AI text crowds out originals in search results and platforms, regardless of quality.
- There’s debate over whether an AI could ever write something better than “War and Peace,” and whether infinite cheap “great novels” would devalue the form.
Art, programming, and what counts as irreplaceable thinking
- One thread contrasts writing as “a special, irreplaceable form of thinking” with coding; others insist software development also has style, taste, and hard‑won mental frameworks.
- Some artists object to AI in their own domain while quietly accepting it in others (e.g., a fashion show with human-designed clothes but AI-generated visuals and music).
- A recurring distinction: art as emotional effect (where origin doesn’t matter) vs art as relationship to a specific human creator (where origin matters a lot).
Cognitive debt, editorial fluency, and tools vs skills
- The article’s “cognitive debt” idea is reframed: the real debt comes from confusing editing with creating. Prompting and refining AI output builds taste for judging text, but not the underlying generative muscles.
- Analogies are drawn to calculators and CAD: tools can be net positive, but if students never first learn to think and write unaided, foundational skills may never form.
- Educators report large‑scale student reliance on LLMs for essays and are alarmed. Some commenters welcome the disruption if it leads to personalized AI tutoring; others fear a future with fewer genuinely educated people and more easily steered populations.
AI slop, style recognition, and human preference
- Many describe a recognizable “LLM cadence”: choppy or over-smoothed, pseudo‑profound, and ultimately shallow. A parody of this style resonates strongly.
- Others demonstrate that with careful prompting, AI can produce more elegant, literary‑sounding prose, but critics say the deeper “soullessness” and incoherence over longer spans remain.
- Some predict a renewed appetite for concise, slightly rough, clearly human writing; skeptics note that models will likely learn to mimic that too, making legal/credential signals or no‑bot spaces increasingly important.
Authorship, authenticity, and personal stances
- A subset of commenters take a hard line: they refuse to attach their name to LLM-written work and will avoid sites that do; for them, text is fundamentally person‑to‑person communication.
- Others describe intensive, multi‑step collaboration with models (for research, synthesis, internal documents) that they see as genuinely enabling new work, even if the prose is generic.
- There is frustration that many “LLM‑powered breakthroughs” are asserted but rarely publicly demonstrable, leading to skepticism.
Education, inequality, and future culture
- Concerns arise that if LLMs can do many white‑collar tasks, elites may deprioritize broad, deep education, leaving most people with narrow vocational training plus AI tools.
- Some foresee “no‑bot‑allowed” enclaves (like certain chat communities today) as the only places where guaranteed human discourse—and thus trust—can persist.
- A darker thread worries about a cultural convergence where people themselves begin to talk and think like LLMs, making the “dead internet” hypothesis feel more plausible.
Cynicism toward cultural gatekeepers
- One late thread dismisses traditional literary and art canons as elitist, mocking celebrated writers and painters as low‑skill or fraudulent, then notes that such “snobs” are often the loudest critics of AI art.
- This expresses a broader resentment: if past taste‑making was arbitrary or status‑driven, it’s unclear why those same authorities should now define what is or isn’t “real art” in the age of AI.