Our approach to age prediction

Initial reactions & “creepy” factor

  • Several commenters say the framing feels invasive—like OpenAI is trying to infer their age personally, not just enforce a policy.
  • Some liken the vibe to “Minority Report” or general “creepy people doing creepy things” with behavioral prediction.

Privacy, surveillance, and biometric concerns

  • Strong resistance to selfie/ID verification via Persona, with anecdotes describing the process as slow, intrusive, and requiring multiple documents.
  • Many see this as a “data grab” that will end up with downstream brokers, leaks, or later repurposing (e.g., for other products or border/security use).
  • Others note that every click, keystroke, mouse movement, and chat—typed or even deleted—is likely logged and can be linked to ID, making it highly deanonymizing.

Advertising and demographic profiling motives

  • Widespread belief this is primarily about building accurate demographic profiles (age, gender, income, etc.) to optimize ad targeting and comply with ad laws on minors.
  • Commenters tie this to OpenAI’s recent ad announcements and predict that misclassifying adults as minors conveniently pushes more users into “verification.”
  • Some argue that LLM conversations are essentially becoming the new ad-targeting corpus, like social media profiles.

Effectiveness and unintended consequences of age prediction

  • Skepticism that behavioral age prediction actually works; multiple users report being misclassified as teens despite being in their 30s–50s.
  • Concern that kids will simply “act adult” (ask mature questions) to evade filters, possibly increasing exposure to adult content.
  • Comparisons to crude legacy “age checks” (e.g., trivia questions in games) that were easily bypassed.

Child safety, responsibility, and censorship debates

  • A minority defends the effort, arguing ChatGPT is now de facto “safety-critical” after suicide-related cases and that companies must try to protect vulnerable users.
  • Others reject framing AI as safety-critical, seeing this as overreach and creeping authoritarianism under “protect the children” rhetoric.
  • Debate over whether focusing on under-18s makes sense when adults can be just as vulnerable or easily influenced.

Regulation, normalization, and trust

  • Discussion of laws pushing age-gating globally, and EFF’s warnings that such rules inherently drive more surveillance.
  • Some worry this normalizes “prove you’re an adult for full functionality” gates, making identity-linked internet use the default by 2030.
  • Several users say they will cancel or switch to competitors rather than provide biometric or ID data, citing a growing trust gap with OpenAI.