Elon Musk wanted an OpenAI for-profit
OpenAI vs. Musk: Lawsuit, Profit Motive, and PR
- Many see OpenAI’s post (emails, timeline) as a PR move responding to Musk’s lawsuit and political influence, not a substantive legal defense.
- Releasing emails is viewed as “airing dirty laundry”: some appreciate the transparency; others call it unprofessional and irrelevant to contract or nonprofit law.
- Several commenters argue Musk’s prior support for a for‑profit structure may show hypocrisy but doesn’t settle whether OpenAI’s nonprofit → capped‑profit structure is legal.
Nonprofit Structure, Legality, and “Moral High Ground”
- Strong concern that a charity spawning a massively valuable for‑profit may violate “private inurement” rules; some argue a judge could “see through” the structure.
- Others note such parent‑nonprofit / for‑profit‑subsidiary structures are common and not inherently illegal.
- Quotes about “remaining a non‑profit” and having a “fiduciary duty to humanity” are now seen as ironic or dishonest given the later pivot to profit and massive capital raises.
- Thread repeatedly labels OpenAI’s “Open” and “safety” branding as bait‑and‑switch or regulatory‑capture theater.
Musk’s Behavior, Politics, and Space/Mars Ambitions
- Musk is portrayed by many as using political power (e.g., campaign spending, influence over regulation) to benefit his companies and harm competitors; others counter with examples like opening Tesla patents.
- His $80B “city on Mars” remark, self‑driving and AGI timelines, and Mars‑colonization economics are widely mocked as overoptimistic or fantastical, though some credit him with executing big visions (especially SpaceX).
AGI Hype, Timelines, and Millenarianism
- Early OpenAI predictions (robotics “completely solved” by ~2020, AGI in ≤10 years, adversarial examples “completely solved” in months) are seen as wildly overconfident and cult‑like.
- Some compare AGI talk to religious millenarianism or past tech bubbles (Segway, VR, NFTs, crypto); others insist current AI is qualitatively different and on a real path to AGI.
Product–Market Fit and Economics of LLMs
- One camp: ChatGPT has obvious product–market fit (hundreds of millions of users, top‑10 website, broad everyday use for search, coding, writing).
- Opposing camp: huge capex, weak unit economics, no single “killer app,” and easily copied technology; risk that local models and big‑tech competitors erode any moat and profits.
- Disagreement over whether current revenue growth offsets enormous training/inference costs and whether this is sustainable or bubble‑like.
Real‑World Utility vs. Limitations
- Many report major productivity gains in domains like coding, research, statistics, machining, legal search, and language learning.
- Others emphasize hallucinations, math/receipt errors, and lack of reliability for high‑stakes tasks; LLMs often require expert oversight and don’t yet replace skilled workers.
Societal Impact: Jobs, UBI, and Inequality
- Debate over a world where AGI labor is cheaper than human labor:
  - Some expect increased productivity, lower prices, and more redistribution (negative income tax, robot taxes, dividends) to keep people afloat.
  - Others foresee mass unemployment, higher mortality among the poor, and concentration of power and wealth in AI owners unless radical reforms happen.