Microsoft and OpenAI's close partnership shows signs of fraying
Economics, costs, and pricing
- OpenAI is reportedly on track to lose ~$5B/year; some worry this implies future price hikes or consolidation under Microsoft.
- Back‑of‑envelope math in the thread suggests ChatGPT subscriptions would need to roughly 3× in price to fully cover the current burn (see the sketch after this list), though others argue most of the loss is R&D spend, not inference.
- Several commenters distinguish the marginal cost of inference (likely at or near profitability per query) from the huge, recurring training and infrastructure costs that drive the losses.
- Some expect per‑token costs to fall with better hardware; others note pressure to keep training ever‑larger models may invert typical “compute gets cheaper” dynamics.
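A rough sketch of the thread's back‑of‑envelope arithmetic, assuming the commonly cited figures of ~10M paying subscribers at $20/month (both illustrative assumptions, not numbers from the thread) set against the reported ~$5B/year loss:

```python
# Back-of-envelope: how much would subscriptions need to rise to cover the burn?
# All inputs are rough public estimates / assumptions, not figures from the thread.

subscribers = 10_000_000        # assumed ChatGPT Plus subscriber count
price_per_month = 20            # current subscription price, USD
annual_loss = 5_000_000_000     # reported ~$5B/year burn

subscription_revenue = subscribers * price_per_month * 12   # ~$2.4B/year
required_revenue = subscription_revenue + annual_loss       # revenue needed to break even

multiplier = required_revenue / subscription_revenue
print(f"Current subscription revenue: ${subscription_revenue / 1e9:.1f}B/year")
print(f"Break-even price multiplier:  {multiplier:.1f}x (~${price_per_month * multiplier:.0f}/month)")
# -> roughly 3x (~$62/month under these assumptions), matching the thread's estimate
```

Under these assumptions the break‑even multiplier lands near 3×; if most of the burn is training and R&D rather than inference, as some commenters argue, the hike needed to cover serving costs alone would be much smaller.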
Moats, competition, and business models
- Many see little durable moat beyond brand, first‑mover advantage, and deep Azure integration; Anthropic, Google, Meta, xAI, and open‑source models (Llama) are viewed as increasingly close competitors.
- Proposed moats: massive compute commitments, infra and tooling for large‑scale training/inference, proprietary high‑quality data, user logs and “digital twins,” and product polish.
- Others argue brand and distribution (3B+ monthly visits, “ChatGPT” as generic term) are a powerful moat, similar to Google vs. Bing—unless OpenAI “enshittifies” with ads or lock‑in.
- Skepticism that “API token vending” is a good standalone business against hyperscalers; subscription products and vertical apps may be where profit lies.
Impact on software work and tooling
- Strong disagreement on whether current LLMs can replace junior devs: some say they already can for many tasks; others say they fail badly in large, idiosyncratic codebases.
- Tools like Cursor and Claude 3.5 Sonnet are praised as huge productivity boosts for coding and debugging; others report they’re only good for boilerplate or trivial tasks.
- Concerns about skill atrophy are set against advice to “extract value while it lasts” while retaining enough non‑AI competence to avoid dependency risk.
Data, training, and “AI slop”
- One camp says access to large, high‑quality training data is the main moat; another (including people “in the space”) disputes that data is a bottleneck.
- Debate over “data pollution”: some think post‑2023 web content will be dominated by AI‑generated text, causing model collapse (a toy illustration follows this list); others argue high‑quality sources (books, newspapers, curated corpora) remain abundant and can be filtered.
- Synthetic data and user‑interaction data (prompts, chats, RLHF) are discussed as future fuel for improved models, though some are skeptical of their value.
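For the model‑collapse claim, a minimal toy simulation (an illustrative sketch, not anything from the thread): fit a Gaussian to data, resample from the fit, and repeat, standing in for successive model generations trained on earlier generations' output.

```python
# Toy model collapse: fit a Gaussian to samples drawn from the previous fit.
# Each generation's MLE variance estimate is biased low by (n-1)/n, so the
# distribution's spread decays toward zero as generations of "synthetic data"
# stack up -- a simplified stand-in for models training on AI-generated text.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_generations = 100, 200

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # generation 0: real data
for gen in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std()        # "train" on current data (MLE fit)
    data = rng.normal(mu, sigma, n_samples)    # next generation sees only synthetic data
    if gen % 50 == 0:
        print(f"generation {gen:3d}: std = {sigma:.3f}")
# std drifts well below 1.0 -- diversity in the "data" is progressively lost
```

The spread decays because each fit slightly underestimates the variance; whether web‑scale pipelines with filtering and a continuing supply of human‑written data behave this way is exactly what the two camps dispute.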
Microsoft–OpenAI relationship and governance
- Several see a classic Microsoft “embrace, extend, extinguish” play: deeply integrating OpenAI while building its own stack, then potentially sidelining OpenAI once Microsoft has the IP and know‑how.
- The AGI clause in the Microsoft–OpenAI deal (different rights “pre‑AGI” vs. “AGI”) is viewed as a legal landmine: some joke OpenAI could declare AGI to escape, others note Microsoft might dispute any such claim.
- Trust and governance are recurring concerns: some say Altman/OpenAI have shown themselves untrustworthy (e.g., governance drama, side ventures) and bet this will hurt them long‑term; others counter that many powerful actors succeed despite dubious behavior.
Safety, AGI narratives, and societal risk
- Strong divide between those who think LLMs are just probabilistic text predictors far from AGI, and those who see emergent reasoning and long‑term risk (e.g., autonomous agents, terrorism, propaganda).
- Multiple commenters are more worried about near‑term harms: degradation of professional services, “AI accounting” without domain expertise, enshittified support, manipulation and propaganda, and concentration of power over data and interfaces.
- Definitions of AGI (e.g., “outperforms humans at most economically valuable work”) are criticized as vague and gameable, especially when tied to contracts and PR.