Greatest irony of the AI age: Humans hired to clean AI slop

Overall sentiment: a mix of curiosity, skepticism, and fatigue

  • Commenters are split between seeing current AI as an important but limited tool and dismissing it as overhyped tech that produces low‑quality “slop” others must clean up.
  • Several note that this “cleanup” work is not new: humans have long corrected outputs of earlier AI (OCR, speech recognition) and automated systems.

“AI slop” and the supposed new job category

  • Many question the “irony”: hiring humans to correct machine output is compared to factory workers removing or repairing defective items on a production line.
  • Others argue the analogy fails: in manufacturing you make the same SKU repeatedly, with layered QA; AI outputs are one‑off, harder to validate, and bad runs can waste all the machine effort.
  • Some doubt there’s a real new profession of “AI slop cleaners,” suggesting it’s mostly hype or rebranding of existing developer/consultant work.

Impact on jobs, juniors, and wages

  • Several argue AI replaces or shrinks the bottom of the career ladder (interns/juniors) in fields like design, translation, copywriting, and coding, while mid‑level and senior roles remain.
  • Concern: if entry‑level roles disappear, the talent pipeline collapses within a few years, because no one is being trained up into those senior roles.
  • Others counter with historical parallels (shipping containers, the plough, the Model T, programming automation): some jobs vanish, but demand scales in new areas and the system re‑equilibrates.
  • One line of argument predicts developers will be rehired at lower pay (or offshored) to clean up AI output; others respond that debugging and cleanup require more skill, not less, so this may not scale as hoped.

Technical progress and “real AI”

  • Text rendering in image generation is seen as rapidly improving; some expect near‑perfect text in generated images within a few years.
  • Debate over whether current LLMs/ML are stepping stones to AGI or a dead end:
    • Critics: LLMs merely predict plausible next tokens, hallucinate confidently, and show no genuine “understanding” (see the sketch after this list).
    • Supporters: language was itself once an AGI benchmark; models can already impose structure on fuzzy input, and future multi‑sensory, self‑modifying systems might emerge from this line of work.
  • Multiple comments note constant goalpost moving: whenever AI hits a milestone, it’s reclassified as “not real AI.”
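  • To make the critics’ point concrete, here is a minimal sketch of what “predicting plausible tokens” looks like in practice. It assumes the Hugging Face transformers library and GPT‑2 purely as an illustrative model; neither is mentioned in the thread, and the prompt is made up.

        # Minimal sketch: a causal LLM's raw output is a ranking over candidate next tokens.
        # Hugging Face transformers and GPT-2 are assumed purely for illustration.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "The capital of France is"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

        next_token_logits = logits[0, -1]        # scores for the position after the prompt
        probs = torch.softmax(next_token_logits, dim=-1)

        # The model does not look up an answer; it only ranks plausible continuations.
        top = torch.topk(probs, k=5)
        for p, idx in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(idx)):>12}  p={p.item():.3f}")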

Environment and resource use

  • Disagreement over energy/water impact:
    • Some cite low per‑inference GPU power and argue datacenters are a small fraction of global energy.
    • Others insist training costs, experimentation, network/device energy, and repeated generations must be included, and accuse existing estimates of cherry‑picking boundaries and flawed assumptions (see the sketch after this list).
  • The only point of consensus is that current public numbers are incomplete or opaque.
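  • Much of the disagreement is about where the accounting boundary is drawn. The back‑of‑envelope sketch below (every figure is an illustrative placeholder, not a number from the thread) shows how amortizing a one‑off training cost over different query volumes moves the per‑query total by orders of magnitude:

        # Back-of-envelope sketch of why per-query energy estimates diverge.
        # Every constant is an illustrative placeholder, not a measured value.
        INFERENCE_WH_PER_QUERY = 3.0        # assumed marginal energy of one generation, in Wh
        TRAINING_WH = 1_000 * 1_000_000     # assumed one-off training cost: 1,000 MWh, in Wh

        def per_query_wh(total_queries: int, include_training: bool) -> float:
            """Per-query energy under a chosen accounting boundary."""
            amortized = TRAINING_WH / total_queries if include_training else 0.0
            return INFERENCE_WH_PER_QUERY + amortized

        for queries in (10_000_000, 1_000_000_000, 100_000_000_000):
            narrow = per_query_wh(queries, include_training=False)
            broad = per_query_wh(queries, include_training=True)
            print(f"{queries:>15,} queries: {narrow:5.1f} Wh inference-only "
                  f"vs {broad:7.2f} Wh with training amortized")

    The same boundary question applies to experimentation runs, idle capacity, and end‑user devices, which is the crux of the cherry‑picking complaint above.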

Media quality, culture, and “slopocalypse”

  • Many see AI as flooding the web with generic, low‑effort content: porn, spam, scammy ads, shallow imagery and text.
  • Some frame AI output as “scaffolding” or “Lorem Ipsum for everything” that humans refine, especially in e‑commerce and ads where “ordinary” is good enough.
  • Concerns surface about a degraded media culture, loss of craft, and a generation that might “do the work” with tools without truly learning the underlying skills.