AI slop, suspicion, and writing back

What “AI slop” is and why people care

  • Many define AI slop as low‑effort, mostly AI‑generated content pushed into human spaces without disclosure.
  • Objections are less about raw quality and more about insincerity, plagiarism-by-proxy, and the imbalance of effort between writer and reader.
  • Some argue that even “high‑quality” AI writing is problematic if it displaces genuine human expression and learning.

Human vs AI slop

  • One side says bad writing is bad regardless of source; readers should judge content, not provenance.
  • Others say human “slop” is usually easier to spot and bounded in volume, whereas AI slop scales cheaply, working like a denial‑of‑service attack on reader attention.
  • Several emphasize “vibes”: even flawed human writing carries effort, individuality, and social meaning that AI text lacks.

Detection, heuristics, and false positives

  • Commenters ridicule weak tells like em‑dashes or smart quotes.
  • Simple detectors and heuristics are shown to misclassify both Wikipedia prose and synthetic datasets.
  • Many worry about false positives: academic penalties, account bans, or reputational damage for humans misidentified as bots.
  • Others say that for purely personal filtering they accept aggressive blocking, even if some real humans get caught in the filter.
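
The kind of naive “tell”-based detector the thread criticizes can be sketched in a few lines. The tell list and threshold below are illustrative assumptions, not any real detection method, and the example shows exactly the failure mode commenters worry about: polished human prose tripping the filter.

```python
# A minimal sketch of a naive "AI tell" detector (illustrative only).
# The tells and threshold are assumptions for demonstration, not a
# real or recommended detection method.

NAIVE_TELLS = ["\u2014", "\u201c", "\u201d", "delve", "tapestry"]

def looks_ai_generated(text: str, threshold: int = 2) -> bool:
    """Flag text containing `threshold` or more superficial 'tells'."""
    lowered = text.lower()
    hits = sum(1 for tell in NAIVE_TELLS if tell in lowered)
    return hits >= threshold

# Polished, entirely human encyclopedic prose with smart quotes and an
# em dash trips the detector: a false positive.
human_prose = "The term \u201cslop\u201d\u2014coined online\u2014predates LLMs."
print(looks_ai_generated(human_prose))  # True: misclassified as AI
```

Such heuristics punish anyone whose style happens to overlap with LLM defaults, which is why commenters treat them as unreliable at best.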

Non‑native speakers and translation

  • Some find non‑native “errors” charming and more meaningful than polished LLM corporate‑speak.
  • Others, especially non‑native writers, want grammatically correct output and see AI as a useful helper.
  • There is strong pushback against undisclosed LLM‑mediated communication and automatic translation, especially where nuance and domain details matter.

Authorship, art, and ethics

  • Many insist authorship and intentionality matter even if AI can match or exceed human quality.
  • Others say that in principle, if an AI novel were as good as a classic, only quality should matter.
  • Several draw analogies to supporting local shops over Walmart: refusing AI art can be a deliberate choice to sustain human creators.

Writing “for AI” and data poisoning

  • Some promote writing to influence future LLMs; others deride this as capitulating to exploitative training practices.
  • A few experiment with planting absurd, obviously false biographies online to see whether future models absorb them.
  • Another camp prefers “poisoning the well” of training data over trying to hide content behind walled gardens.

Practical use of LLMs

  • Many use LLMs as editors, translators, or structure‑generators, then heavily revise.
  • There is broad condemnation of unedited copy‑paste into public or professional contexts.