Microsoft Builds World's Largest AI Supercomputer in Wisconsin, Pioneering Next-Gen AI Infrastructure
September 18, 2025
Microsoft is constructing a massive AI supercomputer at its Wisconsin datacenter, designed to house hundreds of thousands of NVIDIA GPUs on a high-bandwidth network and to deliver ten times the performance of today's top supercomputers for AI training and inference.
To manage the intense heat produced by such densely packed AI hardware, Microsoft employs advanced liquid cooling, including closed-loop water recirculation and large chillers, significantly reducing water usage.
Microsoft emphasizes that its integrated approach, spanning silicon, servers, networks, and datacenters, is crucial to enabling next-generation AI, with the Wisconsin datacenter playing a central role in that strategy.
Supporting these AI datacenters is an exabyte-scale storage system with high transaction throughput and data-access technologies such as BlobFuse2 to meet the demands of large AI training datasets.
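To make the BlobFuse2 point concrete: because BlobFuse2 exposes Azure Blob Storage as a local filesystem mount, training jobs can read blob data with ordinary file I/O rather than a storage SDK. Below is a minimal Python sketch under that assumption; the mount path and shard naming are hypothetical illustrations, not details from Microsoft's post.

```python
from pathlib import Path

# Hypothetical BlobFuse2 mount point: the blob container appears as an
# ordinary directory, so plain file I/O works. (Path and shard layout are
# assumptions for illustration, not details from the source article.)
DATA_ROOT = Path("/mnt/training-data")

def iter_shards(pattern: str = "*.bin"):
    """Yield the name and raw bytes of each dataset shard under the mount."""
    for shard in sorted(DATA_ROOT.glob(pattern)):
        with shard.open("rb") as f:
            yield shard.name, f.read()

if __name__ == "__main__":
    for name, payload in iter_shards():
        print(f"{name}: {len(payload)} bytes")
```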
Microsoft is expanding its AI infrastructure globally, developing additional datacenters in Norway and the UK and planning to build Europe's largest supercomputer, underscoring its commitment to supporting AI workloads worldwide.
The company has announced major investments in purpose-built AI datacenters across the globe, including the new Fairwater AI datacenter in Wisconsin, which spans 315 acres and is Microsoft's largest and most sophisticated AI datacenter to date.
These AI datacenters are interconnected via an AI WAN, forming a distributed, resilient supercomputer that enables scalable, geographically diverse AI training and deployment, moving beyond the limitations of single-facility setups.
Unlike traditional cloud datacenters, AI-specific facilities are purpose-built for large-scale AI training and deployment, featuring dedicated accelerators, high-speed interconnects, and massive storage systems.
The infrastructure includes tightly coupled clusters of NVIDIA GPUs, such as GB200 and GB300, interconnected through NVLink and NVSwitch within racks and InfiniBand and Ethernet across racks, providing low-latency, high-bandwidth communication throughout the datacenter; an example of the collective communication these fabrics support follows below.
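As a generic illustration of the collective operations such fabrics are built for, here is a minimal PyTorch/NCCL all-reduce sketch; NCCL picks the fastest available path, NVLink/NVSwitch within a node and InfiniBand or Ethernet (RoCE) between nodes. The script, launch command, and tensor size are generic assumptions for illustration, not Microsoft's actual training stack.

```python
# Minimal all-reduce sketch (generic PyTorch/NCCL, not Microsoft's stack).
# Launch with, e.g.:  torchrun --nproc_per_node=8 allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main():
    # NCCL transparently uses NVLink/NVSwitch inside a node and
    # InfiniBand or Ethernet (RoCE) across nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor; all_reduce sums them in place on every GPU.
    x = torch.ones(1024, device="cuda") * dist.get_rank()
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("sum across ranks:", x[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```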
Summary based on 1 source
Source

The Official Microsoft Blog • Sep 18, 2025
Inside the world’s most powerful AI datacenter