Liquid cooling challenges for AI data center scaling
Steven Carlini, in an article originally published in Forbes and republished on Schneider Electric’s blog, argues that liquid cooling is becoming essential infrastructure for AI data centers and outlines technical, operational, and vendor-selection requirements.
- Confirmed facts: GPU server power draw and heat output now exceed conventional CPU servers by roughly 20x; the latest NVIDIA-based GPU servers require 132 kW per rack, and the next generation, expected in under a year, will require 240 kW per rack; supplemental air cooling still accounts for 20%–30% of total thermal load; servers now ship standard with input/output piping for liquid coolant.
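The rack-power and air-fraction figures above imply a rough split between liquid and supplemental air cooling per rack. A minimal sketch of that arithmetic; the `cooling_split` helper and the use of the 20%/30% bounds as point estimates are illustrative assumptions, not something the article specifies:

```python
# Hypothetical sketch: split a rack's thermal load between liquid cooling
# and supplemental air cooling, using the article's cited figures
# (132 kW and 240 kW racks; air handles 20%-30% of total heat).
def cooling_split(rack_kw: float, air_fraction: float) -> tuple[float, float]:
    """Return (liquid_kw, air_kw) for a rack dissipating rack_kw of heat."""
    air_kw = rack_kw * air_fraction
    return rack_kw - air_kw, air_kw

for rack_kw in (132.0, 240.0):
    for frac in (0.20, 0.30):
        liquid, air = cooling_split(rack_kw, frac)
        print(f"{rack_kw:.0f} kW rack, {frac:.0%} air: "
              f"liquid {liquid:.1f} kW, air {air:.1f} kW")
```

Even at the low end, a 240 kW rack leaves roughly 48 kW to be removed by air, which is why the article treats supplemental air cooling as a persistent part of the thermal design rather than something liquid cooling eliminates.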
- Recommendations and implementation details: data center cooling requires two separate cooling loops (IT room and heat rejection) connected through Cooling Distribution Units (CDUs); vendors should provide manifolds, piping, CDUs, chillers, pumps, cabinets, integrated controls, warranties, and GPU certifications; designs should be validated with digital twin modeling and lab testing, and should include CDU redundancy (dual pumps and power supplies), uninterruptible power for CDUs, and leak detection in the white space. These are presented as technical recommendations and operational requirements rather than speculative outcomes.