Are we repeating the telecoms crash with AI datacenters?

Article reception & authorship debate

  • Several commenters strongly suspect the piece is LLM-generated, citing stylistic tells (rule-of-three constructions, repetitive phrasing, mixed US/UK spelling), while others argue it’s pointless or unreliable to try to “detect AI” from writing style alone.
  • The author appears in the thread and attributes the spelling inconsistency to personal habit; some find this plausible, while others remain doubtful.
  • Stylistically, some find it “LLM slop” that undermines credibility; others think it’s a useful, comprehensive overview even if imperfect.

Parallels and contrasts with the telecom crash

  • A central point of debate is whether the AI datacenter overbuild resembles the 1990s dark-fiber overbuild.
  • One camp agrees with the article’s claim that overcapacity would be absorbed; others note that telecom capacity was also ultimately absorbed, but only after massive bankruptcies, so absorption did little for the original investors.
  • Key structural difference highlighted: in telco, fiber is a long-lived linear asset, while in AI the expensive part is short-lived GPUs. Fiber can be sweated for decades, but old GPUs may become uneconomic quickly if new generations are vastly more efficient (a rough breakeven sketch follows this list).
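
A minimal breakeven sketch of that difference; every number below (GPU price, wattage, electricity rate, lifetimes, speedups) is an illustrative assumption, not a figure from the article or the thread:

```python
# When does a new GPU generation make a paid-off one uneconomic to run
# on power cost alone? All inputs are assumed, illustrative figures.
HOURS = 8760              # hours per year
KWH_PRICE = 0.08          # assumed industrial electricity rate, $/kWh
WATTS = 700               # assumed per-GPU draw, both generations
NEW_CAPEX = 30_000        # assumed new-GPU price, $
NEW_LIFE = 5              # assumed amortization period, years

power_per_year = WATTS / 1000 * HOURS * KWH_PRICE    # ~$490/yr
new_all_in = NEW_CAPEX / NEW_LIFE + power_per_year   # ~$6,490/yr

# The old gen's capex is sunk, so its marginal cost is just power.
# The new gen must be `breakeven` times faster at equal watts before
# its all-in cost per unit of compute undercuts the old gen's power bill.
breakeven = new_all_in / power_per_year
print(f"breakeven speedup: {breakeven:.1f}x")        # ~13.2x
```

Under these assumptions a paid-off GPU keeps covering its power bill until a roughly 13x perf/watt jump; the faster route to obsolescence is scarce power and rack space, since a 3x perf/watt part earns 3x more per megawatt in the same slot.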

Utilization, pricing, and demand uncertainty

  • Many argue that “overutilization” just means services are underpriced: free tiers and VC-subsidized loss-leader strategies inflate usage, so real demand at sustainable prices is unclear.
  • Others counter that enterprise and “agentic” or background uses (millions of tokens per worker per day, automated customer service, deep tooling integrations) could easily justify massive token consumption.
  • Skeptics point out that current AI vendors are unprofitable, and that you “can’t make it up in volume” if every token is sold at a loss (see the back-of-envelope sketch after this list).
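
A hedged illustration of why volume can’t rescue negative unit economics; the prices and usage volumes below are made-up assumptions, not vendor figures:

```python
# If the fully loaded cost per token exceeds the price per token, every
# extra token widens the loss: volume scales the margin, not the sign.
PRICE_PER_M_TOKENS = 2.00   # assumed revenue per million tokens, $
COST_PER_M_TOKENS = 3.00    # assumed serving cost per million tokens, $

def annual_margin(tokens_per_worker_per_day, workers, workdays=250):
    """Yearly gross margin for a given adoption level."""
    tokens = tokens_per_worker_per_day * workers * workdays
    return (PRICE_PER_M_TOKENS - COST_PER_M_TOKENS) * tokens / 1e6

# "Millions of tokens per worker per day" at growing adoption:
for workers in (1_000, 100_000, 10_000_000):
    print(f"{workers:>10,} workers: {annual_margin(2e6, workers):>16,.0f} $/yr")
```

Losses grow linearly with adoption, so the demand-side optimists and the unit-economics skeptics are really arguing about the sign of the per-token margin at sustainable prices.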

Hardware lifecycle, reuse, and consumer upside

  • Disagreement on whether a crash would benefit hobbyists: some note that datacenter GPUs aren’t consumer-friendly (server form factors, passive cooling, no display outputs), are often destroyed for security or tax reasons, and may be more valuable repurposed internally.
  • Others emphasize that GPUs age “gracefully” and can be ganged together, so there may be less of a glut than in fiber, and less cheap surplus for the public.
  • Several stress that buildings, power feeds, and cooling outlive the GPUs, but represent a small fraction of total capex (a rough depreciation split follows this list).
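
A rough depreciation split that makes the point concrete; the capex shares and asset lives below are assumptions for illustration, since the thread cites no exact figures:

```python
# How much of total datacenter capex survives a GPU-generation turnover?
# Shares and useful lives are assumed, illustrative values.
CAPEX_SHARE = {"GPUs & servers": 0.65, "power & cooling": 0.20,
               "building & land": 0.15}
USEFUL_LIFE = {"GPUs & servers": 5, "power & cooling": 20,
               "building & land": 30}   # years

for item, share in CAPEX_SHARE.items():
    annual = share / USEFUL_LIFE[item]   # straight-line depreciation
    print(f"{item:<16} {share:>4.0%} of capex, ~{annual:.1%}/yr")
```

Under these assumptions roughly two-thirds of the spend depreciates on a ~5-year clock: the inverse of the telecom case, where the long-lived fiber was the bulk of the cost.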

Local models, moats, and competition

  • Debate over whether efficiency gains or new algorithms could shift workloads back to phones/PCs, undercutting cloud ROI; some see this as a major unpriced risk.
  • Many are skeptical there will be a single “default” AI provider: switching costs are low, models feel interchangeable, free tiers abound, and moats based on history/memory or feedback loops are questioned.
  • Others argue data, user memory, and integration into workflows could create sticky moats and support winner-take-most outcomes.

Energy, infrastructure, and systemic risk

  • Several commenters argue the piece underplays electricity constraints and environmental externalities; if overbuilt GPUs are run flat out, the power and CO₂ costs burden everyone else (see the rough energy arithmetic after this list).
  • Some liken current AI capex to a Manhattan Project–scale national bet on AGI, driven more by fear of missing out on AGI than by any clear ROI model.
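
Rough energy arithmetic behind the externality concern; the fleet size, per-GPU draw, utilization, and grid carbon intensity below are all assumptions, not figures from the article or the thread:

```python
# What running an overbuilt GPU fleet flat out implies for power and CO2.
GPUS = 5_000_000            # assumed global fleet size
WATTS_ALL_IN = 1_200        # assumed per GPU incl. cooling/overhead
UTILIZATION = 0.9           # "run flat out"
GRID_KG_CO2_PER_KWH = 0.4   # assumed average grid carbon intensity

kwh_per_year = GPUS * WATTS_ALL_IN / 1000 * 8760 * UTILIZATION
print(f"~{kwh_per_year / 1e9:.0f} TWh/yr")                           # ~47
print(f"~{kwh_per_year * GRID_KG_CO2_PER_KWH / 1e9:.0f} Mt CO2/yr")  # ~19
```

At these assumed numbers the fleet draws on the order of a small country’s annual electricity, which is the externality commenters say the article underprices.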