NVIDIA Leads AI Hardware Race with A16 Capacity for 2027 Feynman GPUs, Amidst Industry's Cooling Challenges
December 23, 2025
Analysts view A16 as the most significant architectural shift since FinFET, one whose wafer costs and cooling requirements could bifurcate the industry.
A16 is an evolution of TSMC’s 2nm (N2) family using second-generation gate-all-around nanosheet transistors, with Super Power Rail (SPR) as a backside power delivery approach distinct from Intel’s PowerVia.
Disclaimer: these analysis-focused notes are provided by TokenRing AI to contextualize current AI developments.
The AI hardware race is accelerating: NVIDIA has reportedly secured dominant A16 capacity for its forthcoming Feynman GPUs in 2027, OpenAI and Broadcom are collaborating on in-house AI inference chips on the A16 node, AMD is eyeing A16 for its MI400 series, and Intel is doubling down on High-NA EUV lithography for its 14A node.
AI data centers are pushing toward leading-edge nodes; NVIDIA’s A16 capacity allocation would support trillion-parameter model training, with the SPR architecture tackling routing congestion and power delivery needs.
OpenAI’s move toward vertical integration is underscored by its partnership with Broadcom to develop its first in-house inference chips on the A16 node; AMD is also adopting A16 for its MI400 lineup, while Intel pursues a lithography-focused strategy centered on its 14A node.
Super Power Rail (SPR) technology reworks power delivery by moving it to the wafer’s backside, freeing the front side for signal routing, enabling higher logic density, and delivering modest speed gains and notable power reductions at the same clock.
SPR enables a potential 8-10% speed increase and a 15-20% power reduction at unchanged clock speeds, along with up to roughly a 1.1x improvement in front-side routing density.
Backside power delivery also buries hot spots beneath the metal stack, complicating heat extraction; these thermal challenges are likely to accelerate adoption of liquid cooling and immersion cooling in AI data centers as compute density rises.
Backside power delivery could unlock roughly 20% more compute within the same energy envelope, addressing the AI power wall and potentially accelerating sovereign AI ambitions and data-center innovations.
However, the move brings cost pressures: at roughly $50,000 per A16 wafer, leading-edge capacity could widen the gap between hyperscalers and smaller startups as power-efficient AI accelerators scale.
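The roughly 20% figure quoted above follows arithmetically from the SPR power numbers: a 15-20% power reduction at unchanged performance translates to about 18-25% more compute per watt. A back-of-envelope sketch (the function name is ours, not from any vendor material):

```python
def compute_per_watt_gain(power_reduction: float) -> float:
    """Relative compute-per-watt gain when power drops by
    `power_reduction` at unchanged performance: 1/(1 - r) - 1."""
    return 1.0 / (1.0 - power_reduction) - 1.0

# Using the quoted 15-20% SPR power reduction at the same clock:
low = compute_per_watt_gain(0.15)   # ~0.18, i.e. ~18% more compute per watt
high = compute_per_watt_gain(0.20)  # 0.25, i.e. 25% more compute per watt
print(f"Compute-per-watt gain: {low:.0%} to {high:.0%}")
# prints "Compute-per-watt gain: 18% to 25%"
```

The midpoint of that range is consistent with the "roughly 20% more compute within the same energy envelope" claim.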
Summary based on 2 sources
Sources

FinancialContent • Dec 23, 2025
The Silicon Frontier: TSMC's A16 and Super Power Rail Redefine the AI Chip Race