The AI vibe shift is upon us
MIT Report and Interpreting “95% Failure”
- Many see the 95% figure as confirming their gut feeling that most corporate AI projects are underwhelming, though few expected the number to be quite that high.
- Several commenters argue the framing is misleading: the study measures lack of rapid revenue impact, not necessarily technical or functional failure.
- Others note the report itself cites leadership issues, poor integration, and employees preferring personal LLM accounts over corporate tools.
Historical Parallels and “Dev-Elimination” Narratives
- Strong parallels are drawn to 4GLs/CASE tools, no‑code, and past AI waves that promised to let “unskilled people write programs” and eliminate developers, mostly failing beyond demos.
- SQL is cited as the rare partial success of this pattern: widely useful, somewhat accessible to non‑experts, but far from replacing programmers.
- Commenters remark that this “kill the devs” narrative recurs roughly every decade, a cycle with no real parallel in professions like civil engineering.
Where AI Is Actually Useful (So Far)
- Consensus that LLMs are good for: small utilities, boilerplate code, low‑grade translation, spammy content, cheap stock‑image replacement, and answering “how do I do X in tool Y?” questions.
- Some developers and learners report genuinely transformative productivity and learning benefits; others see tools that still require strong human oversight and create technical debt.
- A minority note that a small fraction of companies do win big by picking a narrow pain point and executing well.
Economics, Labor, and Bubble Risk
- Many think valuations assume a paradigm shift (replacing workers, multi‑trillion markets) while reality looks more like “nice tool, tens‑of‑billions scale.”
- Inference costs and subsidized pricing are viewed as a looming constraint; some “game‑changing” workflows may not be economically sustainable.
- There’s anxiety about widespread job loss and whether a new social/economic model would be needed to absorb it, along with skepticism that elites would accept such a shift.
Social and Information Impacts
- Commenters see LLMs as unquestionably “world‑changing” for scams, propaganda, and bots, undermining anti‑fraud and anti‑cheating systems and stressing democratic information ecosystems.
- Multiple people worry about AI as an “entropy machine”: if it displaces paid experts, high‑quality new content and training data may dry up, degrading future models and human knowledge.
Hype, Vibe Shift, and Markets
- Some think talk of an AI crash is media‑driven overreaction; others see a genuine “vibe shift” akin to the dot‑com comedown, in which the technology remains real but speculative capital and naive expectations get wiped out.
- There is frustration with overblown, quasi‑religious AI marketing (“AGI soon, might take your job and kill us”) compared to earlier, more incremental product pitches.
- Debate continues over whether big winners (e.g., GPU and ad giants) reflect sustainable AI value or just hype‑driven capital flows.