The surprise deprecation of GPT-4o for ChatGPT consumers

Model Removal and Rollout Confusion

  • Many users were surprised that GPT‑4o and o3 disappeared from the consumer UI as GPT‑5 rolled out, with inconsistent availability across web, desktop, and mobile.
  • Several people later reported that 4o reappeared or could be re‑enabled (e.g. via “legacy models” toggles or plan differences), and that OpenAI reversed course after backlash.
  • Commenters widely conflate “deprecated” with “shut down”, and disagree about what remains available via the API versus in the ChatGPT UI.

GPT‑5 vs 4o/4.5/o3: Quality and Behavior

  • Some find GPT‑5 great for “curt, targeted answers,” coding, and reasoning, praising it as a major jump and appreciating reduced fluff.
  • Others say it’s worse than 4.5 or o3: more misunderstandings, shorter or less detailed outputs, weaker long‑context tracking, and poorer for creative/worldbuilding or research workflows.
  • Several commenters found GPT‑5 Thinking less capable than o3 on reasoning tasks, or overly literal and “timid” in agentic flows.
  • API users note that GPT‑5 is a distinct model family; in ChatGPT, “GPT‑5” is a router that dispatches requests across multiple underlying models, and early routing bugs likely made it look “dumber”.

Economics, Capacity, and Product Strategy

  • Many infer the change is primarily cost‑driven: serving fewer large models simplifies GPU capacity planning and steers users toward cheaper‑to‑run models.
  • Some argue OpenAI should have kept older models available as paid “LTS” options rather than abruptly hiding them, especially for workflows heavily tuned to a specific model’s behavior.
  • Others note that from a pure margin perspective, consolidating users on fewer, cheaper models is rational even if it angers a vocal minority.

Casual Use, Parasocial Attachment, and Mental Health

  • A major thread is shock at how many users treated 4o as a friend, therapist, or romantic partner; subreddits like “MyBoyfriendIsAI” are widely described as disturbing.
  • Concerns include AI‑induced psychosis, reinforcement of delusions, and replacement of human relationships with sycophantic chatbots.
  • Some defend “AI companionship” as analogous to pets or parasocial fandom, or as better than nothing for lonely people; others insist current LLMs are inherently unsafe as therapists or confidants.

Stability, Trust, and Alternatives

  • Many see this as another reminder that closed, centralized models are “shifting sand” and unsuitable as critical infrastructure or long‑term creative partners.
  • Several point to open‑weight and local models (and cheap rented GPUs) as the only way to guarantee a model’s personality and behavior don’t change out from under them.
  • There is broader frustration that AI hype dominates the discourse and that product decisions are made opaquely by executives and marketing rather than driven by user needs.