Who does your assistant serve?

Dystopian trajectory and corporate incentives

  • Several commenters frame current AI use as “early Blade Runner,” with particular horror at companies explicitly pushing parasocial AI “companions,” including for minors.
  • The Reuters report on Meta’s chatbots is cited as evidence that safety and accuracy are clearly secondary to engagement and growth; some call this straightforwardly “evil.”
  • There’s strong concern that AI assistants, especially anthropomorphized ones, are a powerful new tool for exploiting loneliness, comparable to social media but potentially worse.

Local vs hosted models and hardware

  • There’s active debate about whether large, high‑quality models are “unsustainable” to self‑host.
  • Some argue a high‑end Strix Halo/Framework/mini‑PC setup with 100–130 GB of shared memory makes local AI plausible, though still expensive and slower than state‑of‑the‑art cloud models.
  • Others emphasize trade‑offs: token speed, context size, and quality still lag hosted models, and cloud offerings with generous free tiers make local investment hard to justify.
  • Enthusiasts report surprisingly strong experiences with local Gemma and Qwen models for coding help, sysadmin tasks, image transcription, and personal agents (see the sketch after this list).
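
As a concrete illustration of the local setups these commenters describe, here is a minimal sketch of querying a self‑hosted model. It assumes an Ollama server on its default port with a Gemma model already pulled; the model tag and prompt are examples, not details from the thread.

    import requests

    # Assumed setup: `ollama serve` running locally, with a Gemma model
    # pulled beforehand (e.g., `ollama pull gemma2:9b`).
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_local(prompt: str, model: str = "gemma2:9b") -> str:
        """Send one prompt to the local model and return its reply."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,  # local inference can be slow on modest hardware
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(ask_local("Explain what `chmod 644` does."))

Nothing leaves the machine, which is precisely the autonomy argument the self‑hosting advocates are making.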

AI as therapist, friend, or “validation machine”

  • Large subthread on using LLMs for therapy-like conversations:
    • Critics say LLMs mainly mirror and validate user narratives, reinforcing victimhood and unhealthy beliefs, unlike good therapists who challenge and confront.
    • Supporters use LLMs as “supercharged rubber ducks” or late‑night emotional sounding boards, stressing they must not replace real therapy.
    • Multiple people stress that therapy is hard, uncomfortable work; validation‑only (whether human or AI) is often harmful.
  • There’s worry that vulnerable users overestimate their ability to “handle” or critically evaluate AI output precisely when they’re least able to do so.
  • Others argue that even bad or neutral responses can still help by forcing users to articulate and externalize their feelings.

Psychological and societal risks

  • Repeated warnings about anthropomorphizing corporate‑controlled models: users think they’re bonding with a “person” when they’re really engaging with a profit‑maximizing system.
  • Some describe sliding from practical use to deep psychological entanglement with a model, blurring the line between introspection and delusion.
  • People speculate about the harm when models change or are deprecated; for those deeply attached, it could be akin to losing a close friend.

Ownership, privacy, and control

  • Strong theme: assistants ultimately “serve whoever pays for tokens.”
  • Many connect this to long‑standing SaaS concerns: hosted tools can change or break overnight (e.g., GPT‑5 rollout, web apps, Illustrator bugs), with no rollback or recourse.
  • Advocates of self‑hosting stress privacy, autonomy, and the ability to keep a stable “personality,” even if performance is lower (sketched below).
  • Others predict most people will effectively “rent” assistants, as with housing and cloud compute, with only niche local or institutional deployments.
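
The “stable personality” point is concrete: with a self‑hosted model you control both the weights and the system prompt, so neither can change under you. Extending the sketch above, and again using hypothetical names, pinning an exact model tag and a fixed system prompt keeps behavior stable across updates:

    import requests

    # Pin an exact tag (not a floating "latest") plus a fixed system
    # prompt; a hosted vendor can silently change either, but a local
    # deployment cannot change without your involvement.
    MODEL = "qwen2.5:14b"  # example tag; any locally pulled model works
    SYSTEM_PROMPT = "You are a terse, skeptical assistant."

    def chat(user_message: str) -> str:
        """One chat turn against the pinned local model and persona."""
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": MODEL,
                "stream": False,
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user_message},
                ],
            },
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]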

Data, progress, and model limits

  • One thread notes LLMs are bounded by the human data they’re trained on; as more content moves behind paywalls or into closed platforms, progress may slow.
  • Another highlights how LLMs can give plausible but wildly wrong narratives (e.g., misreading time zones in screentime logs), underscoring the danger when they’re applied to mental health or life decisions without skepticism (see the check after this list).
  • Some report positive experiences using OpenAI’s o3 for medical emergencies, but others point to hallucination risks and argue that benchmarks don’t eliminate safety concerns.
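
The time‑zone anecdote shows why deterministic checks beat narrative plausibility. A hypothetical example of the kind of conversion worth doing in code before accepting an LLM’s story about a log:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # A screentime entry logged in UTC can look like alarming late-night
    # use until it is converted to the user's actual time zone.
    logged_utc = datetime(2024, 3, 1, 23, 30, tzinfo=ZoneInfo("UTC"))
    local = logged_utc.astimezone(ZoneInfo("America/Los_Angeles"))
    print(local.isoformat())  # 2024-03-01T15:30:00-08:00, mid-afternoon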

Meta‑discussion and analogies

  • Commenters debate whether to blame “ChatGPT” itself or the humans who wield it, likening the tool to knives or nuclear technology.
  • One commenter likens arguing for DIY/local AI to suggesting people should cook their own meth because dealers adulterate the product.
  • Minor side debates appear over grammar (“who” vs “whom”) and long‑standing free‑software critiques of centralized services.