Datacenters in space aren't going to work

Role of sci‑fi and hype

  • Many see “datacenters in space” as shallow sci‑fi cargo culting: latching onto space aesthetics while ignoring the cautionary, societal focus of real speculative fiction.
  • Several comments frame the idea as investor/PR narrative rather than serious engineering: something to reassure AI/infra investors and distract from terrestrial siting, regulation, and NIMBY issues.

Thermal management and “vacuum cooling”

  • Core consensus: cooling is vastly harder in space. No air or water means essentially no convection; only radiation to deep space is available.
  • Vacuum is an excellent insulator (thermos analogy). To dump multi‑MW of heat, you need gigantic radiators—football‑field to square‑kilometer scale for modern DC loads.
  • Moving heat from chips to those radiators requires complex multi‑stage liquid loops and pumps; any leak is catastrophic, and failed pumps are effectively unserviceable in orbit.
  • A minority argue that with very hot radiators, better coatings, and huge structures, it’s “just engineering,” but even they concede it’s difficult and expensive.
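The radiator sizes cited above can be sanity-checked with the Stefan–Boltzmann law. A minimal sketch, assuming an idealized two-sided flat panel with emissivity 0.9 radiating at a comfortable 300 K (all figures illustrative, not from the discussion):

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumptions (illustrative): two-sided flat panel, emissivity 0.9,
# radiator at 300 K, deep-space sink temperature (~3 K) neglected.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts, temp_kelvin=300.0, emissivity=0.9, sides=2):
    """Panel area needed to radiate heat_watts to deep space."""
    flux = sides * emissivity * SIGMA * temp_kelvin**4  # W per m^2 of panel
    return heat_watts / flux

# A 100 MW datacenter load at a 300 K radiator temperature:
print(f"{radiator_area_m2(100e6):,.0f} m^2")  # on the order of 1.2e5 m^2
```

Raising the radiator temperature helps dramatically (area falls with the fourth power of T), which is the core of the "very hot radiators" argument, but hotter radiators force the chips and coolant loops to run hotter too.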

Radiation and electronics reliability

  • Space datacenters would face high rates of single‑event upsets even in LEO, aggravated in regions like the South Atlantic Anomaly.
  • True rad‑hard CPUs/GPUs exist but are generations behind and extremely expensive; triple‑modular redundancy further slashes effective performance.
  • Some note ML inference is numerically tolerant to bitflips, but for large, precise workloads the reliability penalty is severe.
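The triple-modular redundancy mentioned above is conceptually simple, which makes its cost obvious: three copies of every computation for one usable result. A minimal sketch of bitwise majority voting:

```python
# Minimal sketch of triple-modular redundancy (TMR): run the same
# computation three times and take a bitwise majority vote, masking
# a single-event upset that flips bits in any one copy.
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant results."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1011_0010
upset = correct ^ 0b0000_1000          # one copy suffers a bitflip
assert tmr_vote(correct, correct, upset) == correct
```

The vote masks any single corrupted copy, but the 3x compute overhead (plus the voting logic itself) is exactly the "slashed effective performance" the thread describes.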

Economics, scale, and maintenance

  • Launch costs, station‑keeping, gigantic radiators, shielding, and ground stations make per‑MW cost orders of magnitude above terrestrial DCs, even assuming Starship‑level prices.
  • GPU lifetimes (~5 years) clash with “launch once, leave it there” dreams; maintenance missions are prohibitively expensive, and fail‑in‑place designs waste enormous capital.
  • Comparisons to Microsoft’s underwater Project Natick: cooling “worked,” but logistics and maintenance killed scalability; space would inherit those problems plus worse cooling and radiation.
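The "orders of magnitude" cost claim can be roughed out with launch-price arithmetic. A sketch using purely illustrative assumptions (rack mass, radiator mass per kW, and aspirational vs. near-current $/kg figures are mine, not from the discussion):

```python
# Rough launch-cost arithmetic for orbital compute.
# All figures below are illustrative assumptions.
RACK_MASS_KG = 1500.0     # assumed IT hardware mass per ~40 kW rack
RADIATOR_KG_PER_KW = 25.0 # assumed radiator/plumbing/structure mass per kW rejected

def launch_cost_per_mw(cost_per_kg):
    """Launch cost (USD) to orbit 1 MW of IT load plus its thermal system."""
    racks = 1000.0 / 40.0                    # 40 kW racks per MW of IT load
    it_mass = racks * RACK_MASS_KG           # 37,500 kg of IT hardware
    thermal_mass = RADIATOR_KG_PER_KW * 1000.0  # 25,000 kg of radiators
    return (it_mass + thermal_mass) * cost_per_kg

print(launch_cost_per_mw(200.0))   # ~$12.5M/MW at an aspirational Starship price
print(launch_cost_per_mw(2700.0))  # ~$169M/MW at roughly current commercial rates
```

Even at the aspirational price, launch alone is comparable to the entire build cost of a terrestrial datacenter megawatt, before station-keeping, shielding, ground stations, or the ~5-year hardware refresh problem.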

Latency, bandwidth, and realistic use cases

  • Space link bandwidth is a tiny fraction of intra‑DC fiber; Starlink‑class bandwidth and latency are hopeless for large training clusters that depend on ultra‑fast interconnects.
  • More plausible niche: processing space‑originating data in orbit (imaging, surveillance, autonomous spacecraft), where local compute reduces downlink needs.
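The interconnect gap is easy to quantify. A sketch comparing the time to move one large gradient exchange (~2 TB, e.g. fp16 parameters for a trillion-parameter model) over an intra-cluster GPU fabric versus a satellite downlink; both link speeds are illustrative assumptions:

```python
# Why orbital links can't feed a training cluster: time to move ~2 TB
# (one large gradient/parameter exchange) over different links.
# Link speeds are illustrative assumptions.
GRAD_BYTES = 2e12

def transfer_seconds(num_bytes, gbit_per_s):
    """Seconds to move num_bytes over a link of gbit_per_s gigabits/second."""
    return num_bytes * 8 / (gbit_per_s * 1e9)

print(transfer_seconds(GRAD_BYTES, 3200))  # GPU fabric (~400 GB/s): 5 s
print(transfer_seconds(GRAD_BYTES, 1))     # ~1 Gbit/s space downlink: ~4.4 hours
```

A three-orders-of-magnitude bandwidth gap per exchange is why the plausible niche is compute that stays in orbit with the data, not clusters that must synchronize with the ground.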

Alternative locations (ocean, poles, Moon, asteroids)

  • Underwater, Arctic/Antarctic, rural, and bunker DCs are repeatedly cited as far more practical ways to get cheap cooling, isolation, or security.
  • Moon/asteroid concepts face similar radiation and worse thermal issues; lunar regolith is an insulator, not an effective heatsink.

Security, jurisdiction, and dual use

  • Some speculate about evading nation‑states or enabling resilient crypto/“sovereign” infra in orbit; others point out space assets are traceable, treaty‑bound, and trivially targetable by ASAT weapons.
  • More credible “dual use” story: on‑orbit compute for military sensing, tracking, and battle‑management—though that still doesn’t justify general AI datacenters in orbit.

Environmental and solar‑power arguments

  • Space solar receives stronger, more consistent insolation, but critics stress you still must radiate the same energy away; the thermal problem dominates.
  • Climate impact of frequent launches is flagged as unclear but potentially serious; relying on rockets to “green” AI compute is viewed skeptically.
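The "you still must radiate it all away" point is just energy balance at equilibrium: absorbed solar flux equals radiated thermal flux. A sketch for an idealized sunlit flat plate (illustrative, ignoring electrical conversion and real coatings):

```python
# Energy-balance sketch: at equilibrium, absorbed solar power equals
# radiated thermal power. Idealized flat plate facing the Sun.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's distance from the Sun

def equilibrium_temp_k(absorptivity=1.0, emissivity=1.0, sides=2):
    """Equilibrium temperature: one sunlit face absorbing, `sides` faces radiating."""
    return (absorptivity * SOLAR_CONSTANT / (sides * emissivity * SIGMA)) ** 0.25

print(f"{equilibrium_temp_k():.0f} K")  # ~331 K for a two-sided black plate
```

Every watt the panels capture for compute ultimately becomes heat the radiators must reject, so "free" solar power does not shrink the thermal problem at all.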

Optimism vs. “fundamentally dumb”

  • A small camp argues “hard ≠ impossible” and that billionaires funding R&D can advance space thermal tech and on‑orbit compute for other missions.
  • The dominant view: this isn’t merely difficult, it’s structurally worse than ground DCs on every important axis—cooling, cost, bandwidth, maintenance, and legal risk—so the idea is, for now, fundamentally uneconomic and mostly marketing.