Google's Liquid Cooling

Existing Liquid Cooling & PUE Comparisons

  • Commenters note that OVH and others have used water / immersion cooling for years, but OVH’s disclosed PUE (1.26) is seen as poor versus Google (1.09) and Meta (~1.08); a rough comparison of what those figures imply follows this list.
  • OVH’s immersion efforts appear to be more lab-scale than broad production, and their traditional water-loop setup looks less efficient than Google’s on PUE.
  • Some argue Google’s architecture (CDUs, facility loops) resembles decades-old mainframe and supercomputer designs, with more optimization than true novelty.
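
A minimal sketch of what the quoted PUE figures imply, assuming an arbitrary 10 MW IT load (not a disclosed figure for any operator): since PUE is total facility power divided by IT power, the overhead fraction is simply PUE − 1.

```python
# Rough comparison of the facility overhead implied by the quoted PUE figures.
# PUE = total facility power / IT equipment power, so overhead = PUE - 1.
# The 10 MW IT load is an arbitrary illustrative number, not a disclosed
# figure for any of these operators.

IT_LOAD_MW = 10.0  # hypothetical IT load

for name, pue in [("OVH", 1.26), ("Google", 1.09), ("Meta", 1.08)]:
    total_mw = IT_LOAD_MW * pue
    overhead_mw = total_mw - IT_LOAD_MW
    print(f"{name:7s} PUE {pue:.2f} -> {total_mw:.1f} MW total, "
          f"{overhead_mw:.1f} MW overhead ({overhead_mw / IT_LOAD_MW:.0%} of IT load)")
```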

Series vs Parallel Cooling Debate

  • Long subthread argues whether putting chips in series vs parallel materially affects cooling.
  • Key points: water outlet temperature is dictated by total heat load and flow rate, regardless of topology; in series, later chips see warmer water and run slightly hotter (see the energy-balance sketch after this list).
  • Others stress practical engineering: flow rates are sized to hit target temperatures, and total heat-transfer capacity, not loop layout, is usually the limiting factor.
  • Consensus: with adequate flow, temperature differences along the loop are small, and layout is secondary to overall thermal design.
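
A minimal sketch of the energy-balance argument above, with purely illustrative numbers for chip power, flow rate, and inlet temperature: the loop outlet temperature follows Q = ṁ·c_p·ΔT either way, and a series arrangement only changes how warm the coolant already is when it reaches each chip.

```python
# Energy-balance sketch behind the series-vs-parallel argument.
# Outlet temperature depends only on total heat and flow: Q = m_dot * c_p * dT.
# In a series loop, chip i sees coolant already warmed by chips 1..i-1.
# All numbers below (chip power, flow rate, inlet temp) are illustrative.

CP_WATER = 4186.0      # J/(kg*K), specific heat of water
CHIP_POWER_W = 700.0   # hypothetical per-chip heat load
N_CHIPS = 4            # chips per series loop
FLOW_KG_S = 0.05       # hypothetical coolant mass flow per loop (~3 L/min)
T_INLET_C = 30.0       # hypothetical coolant supply temperature

total_heat_w = CHIP_POWER_W * N_CHIPS
outlet_dt = total_heat_w / (FLOW_KG_S * CP_WATER)   # same for series or parallel
print(f"Loop outlet: {T_INLET_C + outlet_dt:.1f} C (dT = {outlet_dt:.1f} K)")

# Coolant temperature seen by each chip along a series loop.
t = T_INLET_C
for i in range(1, N_CHIPS + 1):
    print(f"chip {i}: coolant inlet at {t:.1f} C")
    t += CHIP_POWER_W / (FLOW_KG_S * CP_WATER)
```

With these numbers the last chip in the series sees water only a few kelvin warmer than the first, which is the thread's point that adequate flow makes the topology a second-order concern.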

What’s Actually New (or Not) in Google’s Design

  • Many see the main shift as facility-scale: direct liquid loops from external chillers/CDUs into every rack, minimizing air interfaces and fan usage.
  • Critics counter that CDU-based, water-to-water architectures have existed in mainframes since the 1960s; Google’s gain is in scale, integration, and PUE, not invention.
  • Direct-die cold plates, per-chip flow-control valves, and dense TPU packing are highlighted as impressive fine-grained engineering (a control-loop sketch follows this list).
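
Purely as an illustration of what "per-chip flow control" could mean (not Google's actual control scheme), here is a toy proportional controller that opens a chip's valve wider as die temperature rises above a setpoint; all names and constants are hypothetical.

```python
# Minimal sketch of per-chip flow control, NOT Google's actual scheme:
# a proportional controller opens a chip's valve further as its die
# temperature rises above a setpoint. All names and constants are
# hypothetical illustrations of the "per-chip flow-control valve" idea.

def valve_opening(die_temp_c: float,
                  setpoint_c: float = 70.0,
                  gain: float = 0.05,      # opening fraction per kelvin of error
                  min_open: float = 0.2,   # keep some flow even when cool
                  max_open: float = 1.0) -> float:
    """Return a valve opening fraction in [min_open, max_open]."""
    error = die_temp_c - setpoint_c
    opening = min_open + gain * max(error, 0.0)
    return min(max_open, max(min_open, opening))

for temp in (60.0, 70.0, 75.0, 85.0, 95.0):
    print(f"die {temp:.0f} C -> valve {valve_opening(temp):.0%} open")
```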

Density, Economics, and Reliability

  • Drivers cited: growing chip TDP, tight interconnect requirements for ML clusters, and high data-center cooling power overhead.
  • Liquid cooling cuts fan power, enables higher rack power density, and shifts complexity from many small fans to a few large pumps and CDUs (a heat-capacity comparison follows this list).
  • Leak testing, quick-disconnect couplers, and standardized maintenance procedures are emphasized as essential at scale; failures in other systems (e.g. leaking bags, spray incidents) are mentioned as cautionary tales.
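
A back-of-envelope comparison, using standard textbook properties at roomish conditions, of why moving heat in water rather than air supports much higher rack density for a given coolant volume:

```python
# Why liquid enables higher rack density: water carries far more heat per
# unit volume per kelvin than air. Properties are standard textbook values,
# rounded.

AIR_DENSITY = 1.2         # kg/m^3
AIR_CP = 1005.0           # J/(kg*K)
WATER_DENSITY = 997.0     # kg/m^3
WATER_CP = 4186.0         # J/(kg*K)

air_vol_heat = AIR_DENSITY * AIR_CP        # J/(m^3*K)
water_vol_heat = WATER_DENSITY * WATER_CP  # J/(m^3*K)
print(f"air:   {air_vol_heat:,.0f} J/(m^3*K)")
print(f"water: {water_vol_heat:,.0f} J/(m^3*K)")
print(f"ratio: ~{water_vol_heat / air_vol_heat:,.0f}x more heat per unit volume")
```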

Water Usage & Environmental Concerns

  • Discussion distinguishes closed liquid loops, which consume essentially no water, from facility-level evaporative cooling towers, where water actually evaporates (a rough evaporation estimate follows this list).
  • Some see AI/data-center water use as overblown relative to national water consumption; others stress local water stress and poorly priced water rights.
  • Debate over whether “saving water everywhere” is meaningful; in wet regions reduced flows can even harm sewer systems, while in arid regions data-center draw is a real concern.
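
A rough upper bound on evaporative water use, assuming (unrealistically) that all rejected heat leaves via evaporation and taking a latent heat of vaporization of roughly 2.4 MJ/kg; real towers also reject some heat sensibly and add blowdown water, and the 10 MW load is hypothetical.

```python
# Rough upper bound on cooling-tower water use, assuming ALL rejected heat
# leaves via evaporation. Latent heat of vaporization near tower operating
# temperatures is roughly 2.4 MJ/kg; the 10 MW heat load is hypothetical.

LATENT_HEAT_J_PER_KG = 2.4e6
HEAT_LOAD_MW = 10.0

kg_per_s = HEAT_LOAD_MW * 1e6 / LATENT_HEAT_J_PER_KG
m3_per_day = kg_per_s * 86400 / 1000   # 1 kg of water is about 1 liter
print(f"{HEAT_LOAD_MW:.0f} MW rejected evaporatively -> "
      f"~{kg_per_s:.1f} L/s, ~{m3_per_day:.0f} m^3/day")
```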

Waste Heat Reuse

  • Multiple comments cite examples of reusing data-center heat for pools, district heating, or greenhouses; it is technically feasible but deployed only sparingly due to ROI and siting constraints.

Attitudes Toward Google & Hyperscalers

  • A number of comments express fatigue or hostility toward Google and other large platforms, seeing these cooling writeups as PR amid broader concerns about monopoly power and environmental impact.