Modern-Day Oracles or Bullshit Machines? How to thrive in a ChatGPT world

Course Goals & Framing

  • Course is positioned as humanities, not CS: about “what it means to be human” with ubiquitous LLMs, and how to live/thrive alongside them.
  • Focus is on when to use AI and when not to, rather than on “prompt engineering” or implementation details.
  • Several commenters see it as exactly the kind of media‑ and info‑literacy course schools and universities have lacked (alongside “digital self‑defense”, data security, social media skepticism, etc.).
  • Instructors stress a dialectical approach: students already use LLMs and feel both enthusiasm and anxiety; the course helps them reason through benefits/harms, not just accept a party line.

“Bullshit Machines” vs Oracles

  • “Bullshit” is used in Harry Frankfurt’s sense (from his essay On Bullshit): language intended to sound authoritative or persuasive without regard for truth, as opposed to lying, where the liar knows the truth and deliberately inverts it.
  • Supporters say this perfectly captures LLM behavior: they generate fluent, confident prose without any built‑in truth criterion and rarely say “I don’t know”.
  • Critics argue the term is emotionally loaded, anthropomorphizes “intent”, and downplays that model builders do try to optimize for correctness, not flattery.
  • Some suggest softer metaphors (“waffle machines”, “autocomplete on steroids”) or worry the title makes the material politically unusable in AI‑enthusiastic workplaces.

What LLMs Are (and Aren’t)

  • Many commenters endorse the “fancy autocomplete” / next‑token‑prediction explanation, with the key caveat that training on massive amounts of human text yields surprisingly rich capabilities (see the code sketch after this list).
  • Long debate over whether that implies a “model of reality” or merely word‑co‑occurrence statistics; some argue that statistics over human text encode consensus knowledge, while others emphasize the lack of grounding or empirical contact.
  • “Hallucination” is criticized as a marketing euphemism: when models fabricate, they are functioning as designed—guessing and sounding confident—rather than “malfunctioning”.
  • Reasoning is contested:
    • Pro‑reasoning side cites chain‑of‑thought models, logic puzzles, code synthesis, and novel‑seeming problem solving as evidence of at least limited reasoning.
    • Skeptical side counters with systematic failures on simple arithmetic, value comparisons, and Sudoku, and with chain‑of‑thought traces that retrofit bogus explanations, arguing this is convincing mimicry rather than robust logic.
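
  To make the “fancy autocomplete” framing concrete, here is a minimal, hypothetical sketch of next‑token prediction using a toy bigram model in Python. The corpus and token choices are invented for illustration, and production LLMs use transformer networks over subword tokens rather than bigram counts, but the generation loop has the same shape: score candidate next tokens, sample one, repeat. Nothing in that loop checks whether the emitted text is true.

    import random
    from collections import defaultdict, Counter

    # Toy next-token predictor: a bigram model over a tiny invented corpus.
    # Real LLMs use transformers over subword tokens, but the generation
    # loop has the same shape, and nothing in it checks for truth.
    corpus = (
        "the model writes fluent text . "
        "the model sounds confident . "
        "fluent text sounds confident but may be wrong ."
    ).split()

    # Count how often each token follows each preceding token.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev):
        """Sample the next token in proportion to how often it followed
        `prev` in training: a fluency criterion, not a truth criterion."""
        tokens, weights = zip(*counts[prev].items())
        return random.choices(tokens, weights=weights)[0]

    token, output = "the", ["the"]
    for _ in range(12):
        token = next_token(token)
        output.append(token)
        if token == ".":
            break
    print(" ".join(output))  # e.g. "the model sounds confident ."

  Even this toy emits grammatical, confident‑sounding sentences, including recombinations that never appeared in its training data; the thread’s point is that scaling up the same fluency‑only objective explains both the impressive output and the absence of a built‑in truth criterion.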

Practical Use, Misuse, and Risk

  • Multiple anecdotes of professionals pasting unverified LLM text into reports, policy memos, legal and planning advice, and internal docs; older and younger workers alike are doing this.
  • Developers report real productivity gains for code boilerplate, refactors, and mundane writing—but only with close human review; models are compared to “a loud, pushy intern” or “fancy autocomplete”, not a true copilot.
  • Widespread concern about information quality:
    • Scams and phishing become more scalable and personalized.
    • The web, search results, and even citations are being flooded with plausible but inaccurate or fabricated content.
    • Trust in online sources and even in traditional institutions (news, academia, Wikipedia) is seen as increasingly fragile.
  • Some see AI as the latest step in a longer arc: from Wikipedia misuse to social‑media misinformation to today’s frictionless BS generation.

Pedagogy, Design, and Audience

  • Many praise the course as clear, accessible, and well‑sourced, with useful case studies and principles rather than rigid rules.
  • Others criticize the “scrollytelling” site: jerky animations, scrolljacking, and poor accessibility on Firefox/iOS; repeated calls for a plain‑text or PDF version.
  • Authors say 18–20‑year‑olds preferred this style but accept they need a parallel, simpler layout for other readers.
  • Several educators plan to use it with undergrads and even medical students; some request more explicit treatment of scientific/technical writing and exercises for practicing BS‑detection.

Broader Reflections & Disagreements

  • Thread contains deep philosophical disagreement about:
    • Whether humans themselves mostly operate on “consensus reality” and persuasion, making the human/LLM gap smaller than people like to admit.
    • Whether rapid capability progress will soon overturn current limitations, making categorical claims (“they can’t reason”, “they have no ground truth”) risky.
    • How much the core problem is AI itself versus long‑standing human tendencies toward credulity, hype, and outsourcing thinking.
  • Despite sharp disagreements over capabilities and terminology, there is broad convergence on one point: people badly need better critical‑thinking habits and epistemic hygiene in a world where convincing text is cheap and ubiquitous.