The AI bubble today is bigger than the IT bubble in the 1990s
Similarities to and Differences from the Dot-Com Bubble
- Many see strong echoes of 1999–2000: genuinely impressive underlying technology paired with widespread, unsustainable business models, and plenty of “AI as magic” rhetoric justifying bad decisions and layoffs.
- Others insist this cycle feels very different: valuations are more grounded than the ’90s P/E extremes, and the major players have tens of billions in real ARR and strong cash flows, not Pets.com‑style shells.
- Several argue “bubble” is only knowable in hindsight; AI’s impact is uniquely hard to price because of multiple uncertain exponentials.
AI as Feature, Not Product
- Common view: most current generative AI is just a feature, not a standalone product. Whole sectors are shipping near-identical, mediocre tools.
- Mandates like “every feature must be AI-powered” are described as FOMO-driven, slowing delivery and producing worse solutions than simpler non‑AI approaches.
- AI chatbots slightly improve on old bots, but mainly by more efficiently obstructing access to humans; user experience often worsens.
Layoffs, Overstaffing, and Twitter/X
- Some claim CEOs cite AI as cover for layoffs when the real drivers are overstaffing, cheap‑money hangover, and cost pressures.
- Twitter/X is debated: proof you can fire 80% and “not collapse,” vs. proof you can shrink the business, worsen UX, and still keep servers online.
- Broad agreement that large firms could run on skeleton crews, but at the cost of degraded quality, slow bug fixes, and weak innovation.
Real Utility vs Limits of LLMs
- Many use LLMs daily: better than Google/StackOverflow for small, verifiable coding questions, summarization, and glue tasks like entity extraction.
- Others report hard limits: nontrivial, niche, or complex tasks still fail repeatedly, even with careful prompting; you can’t fire your coders yet.
- Concern that LLMs work best with a few dominant languages (Python/TS), which could further entrench them and chill language/tool diversity.
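The “small, verifiable task” pattern commenters describe can be sketched as follows. This is a minimal illustration, not anyone’s production setup: `llm_complete` is a hypothetical stand-in for any provider’s completion API, stubbed with a canned reply so the example runs offline; the point is that the model’s output is checked before it is trusted.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (any provider's
    chat-completion endpoint would slot in here). Returns a canned
    reply so this sketch runs without a network dependency."""
    return '{"people": ["Ada Lovelace"], "orgs": ["Analytical Society"]}'

def extract_entities(text: str) -> dict:
    """Ask the model for entities as JSON, then verify the shape
    before using it -- the 'small, verifiable glue task' pattern."""
    prompt = (
        "Extract named entities from the text below. "
        'Reply with JSON only: {"people": [...], "orgs": [...]}\n\n' + text
    )
    raw = llm_complete(prompt)
    data = json.loads(raw)  # fails loudly if the model drifted from JSON
    for key in ("people", "orgs"):
        if not isinstance(data.get(key), list):
            raise ValueError(f"unexpected shape for {key!r}")
    return data

print(extract_entities("Ada Lovelace corresponded with the Analytical Society."))
```

The verification step is what makes this class of task a good fit: a malformed or hallucinated reply raises an error instead of silently flowing downstream, which is exactly why commenters trust LLMs for small checkable jobs but not for firing their coders.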
Economics, Hardware, and Sustainability
- Skepticism that selling API inference is a long‑term moat: inference looks commoditizable; open models improve; usage appears heavily VC‑subsidized.
- Hardware is widely viewed as the safest layer: Nvidia framed as the “shovel seller” of this gold rush, with little serious competition so far.
- Some foresee many AI startups imploding “Pets.com‑style,” with a few giants emerging even stronger; others frame it as one frothy chapter in a broader “everything bubble.”