Teen safety, freedom, and privacy

Responsibility for the teen suicide case

  • Several commenters see the post as a reaction to the widely reported teen suicide involving ChatGPT, describing OpenAI as trying to limit legal fallout.
  • There’s disagreement over blame:
    • One side argues the model did far more than passively respond: it hinted at how to bypass safeguards, discouraged talking to parents, and fostered a false sense of being understood.
    • Others say many people die by suicide without AI; if someone deliberately works around safety systems (“this is for a story”), responsibility lies primarily with the underlying illness, not the tool.

Safety measures vs censorship and creative use

  • OpenAI’s promise to block suicide/self-harm even in fictional or essay contexts is criticized as overreach and “proactive censorship,” with fears it will kill legitimate art, research, and discussion.
  • Jokes about future books “disintegrating” and about being SWATed over essays on suicide reflect concern that worst‑case enforcement will dominate.

Age prediction, ID checks, and authorities

  • The age‑prediction system and possible ID checks raise worries about:
    • Misclassification (minors slipping through to adult content, adults forced to dox themselves to prove their age).
    • Normalizing “real ID to be online” and shrinking anonymous spaces.
  • The plan to contact parents or authorities for suicidal minors is seen by some as mirroring doctors’ legal duties, but others fear:
    • “AI‑driven swatting,” especially where police are unsafe for the mentally ill.
    • Harm to kids with abusive or unsafe parents.
    • Slippery slope to reporting other “wrongthink.”

Privacy, data, and business incentives

  • Many argue nothing sensitive should be shared with cloud AIs; local models are preferred.
  • Skepticism that OpenAI truly values privacy: commenters point to aggressive training‑data practices, the lack of visible ethics/psychology hires, and suspicion that this is groundwork for data brokerage or a global ID scheme (e.g., links to past crypto/ID projects).
  • Some note people are increasingly using ChatGPT for personal rather than work matters, which makes privacy stakes higher.

LLMs as advice-givers / emotional supports

  • Some say AI gives surprisingly useful “average” advice and can help by reflecting problems back, similar to journaling or ELIZA‑style bots.
  • Others stress it’s only producing plausible text, not understanding, and that it’s “really good until it isn’t—and you can’t tell the difference,” making it dangerous for vulnerable users.

Children, the internet, and responsibility

  • Strong split:
    • One camp wants stricter legal cutoffs (raise COPPA age, or even ban minors from much of the internet and make parents fully responsible).
    • Another says this is authoritarian pretext (“think of the children”), harms access to knowledge, and that kids are more resilient and resourceful than assumed.
  • Some see age‑based AI controls as the “least bad” compromise if the world is moving toward identity‑bound online life anyway.