Silicon Photonics Revolutionizes AI Hardware, Marvell Acquires Celestial AI for $3.25B Amid Industry Shift
December 29, 2025
Global implications include potential 5x-10x reductions in data-movement energy, disaggregated memory models, and geopolitical considerations over photonics manufacturing and export controls.
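As a rough sense of scale for that 5x-10x figure, the sketch below divides an assumed electrical per-bit energy by assumed optical per-bit energies. The ~5 pJ/bit electrical and 0.5-1 pJ/bit optical values are illustrative assumptions chosen to match the range reported in the sources, not vendor specifications.

```python
# Back-of-envelope check of the 5x-10x data-movement energy figure.
# Per-bit energies below are illustrative assumptions, not vendor specs.
ELECTRICAL_PJ_PER_BIT = 5.0            # assumed copper/SerDes electrical link
OPTICAL_PJ_PER_BIT_RANGE = (1.0, 0.5)  # assumed co-packaged optical link

for optical_pj in OPTICAL_PJ_PER_BIT_RANGE:
    factor = ELECTRICAL_PJ_PER_BIT / optical_pj
    print(f"optical at {optical_pj} pJ/bit -> {factor:.0f}x less energy per bit moved")
# optical at 1.0 pJ/bit -> 5x less energy per bit moved
# optical at 0.5 pJ/bit -> 10x less energy per bit moved
```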
The near-term outlook calls for rollouts of 1.6T and 3.2T optical networks through 2026–2027, possible industry consolidation, and a move toward optical-first AI architectures that unify processors and memory in a light-driven compute fabric.
Longer-term prospects envision silicon photonics expanding to edge AI devices and high-end workstations, contingent on advances like on-chip laser integration and monolithic optical chips to reduce cost and complexity.
Broadcom’s Tomahawk 6 switch with 200G-per-lane co-packaged optics (CPO) integrates optical engines directly into the switch package, supporting 1.6T to 3.2T interconnects across data centers and cutting data-movement energy.
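The port math behind those interconnect speeds is simple lane aggregation; the sketch below shows how 200G lanes add up to the 1.6T and 3.2T figures cited (a quick arithmetic illustration, not a description of any specific port configuration).

```python
# How 200 Gbps lanes aggregate into the cited interconnect speeds.
LANE_GBPS = 200

for port_tbps in (1.6, 3.2):
    lanes = round(port_tbps * 1000 / LANE_GBPS)
    print(f"{port_tbps} Tbps interconnect = {lanes} lanes x {LANE_GBPS}G")
# 1.6 Tbps interconnect = 8 lanes x 200G
# 3.2 Tbps interconnect = 16 lanes x 200G
```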
Silicon photonics has reached commercial prominence in AI infrastructure, with Marvell agreeing to acquire Celestial AI for $3.25 billion and Broadcom reporting $20 billion in AI hardware revenue for 2025, signaling a shift away from copper interconnects.
The shift to silicon photonics addresses the AI memory wall, enabling scale-up architectures where racks share memory and data paths, reducing data movement bottlenecks and energy costs.
Overall, silicon photonics has hit a commercial tipping point, reshaping AI hardware by easing the memory wall and dramatically improving data-movement efficiency, while regulatory and manufacturing challenges remain as the industry transitions to light-based interconnects.
Challenges include manufacturing precision at nanometer scales, standardization battles over optical protocols, and the ongoing need to integrate on-chip lasers for fully monolithic optical chips.
Co-Packaged Optics and optical I/O chiplets enable edgeless I/O, placing optical engines in the same package as GPU or switch dies rather than confining connectivity to the die edge, which dramatically reduces interconnect power per bit.
Industry-wide adoption is exemplified by Broadcom’s 102.4 Tbps Tomahawk 6 CPO platform and Nvidia’s Rubin architecture with Spectrum-X, both delivering substantial reductions in energy per bit.
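To put "substantial reductions in energy per bit" into switch-scale terms, the sketch below converts per-bit energy into total I/O power at 102.4 Tbps, reusing the same illustrative per-bit energies as the earlier sketch (assumptions for the arithmetic, not measured platform figures).

```python
# I/O power at a fully loaded 102.4 Tbps switch for different per-bit energies.
# (102.4 Tbps also corresponds to 512 lanes at 200G each.)
AGGREGATE_TBPS = 102.4
bits_per_second = AGGREGATE_TBPS * 1e12

for label, pj_per_bit in [("electrical SerDes (assumed)", 5.0),
                          ("co-packaged optics (assumed)", 1.0)]:
    watts = bits_per_second * pj_per_bit * 1e-12
    print(f"{label}: {pj_per_bit} pJ/bit -> {watts:.0f} W of interconnect power")
# electrical SerDes (assumed): 5.0 pJ/bit -> 512 W of interconnect power
# co-packaged optics (assumed): 1.0 pJ/bit -> 102 W of interconnect power
```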
Lightmatter’s Passage M1000 photonic interposer achieves 114 Tbps bandwidth, enabling thousands of accelerators to operate as a unified processor with near-zero latency.
Summary based on 3 sources
Sources

Silicon Valley • Dec 18, 2025
The Optical Revolution: Silicon Photonics Shatters the AI Interconnect Bottleneck
FinancialContent • Dec 29, 2025
The Light Speed Revolution: Silicon Photonics Hits Commercial Prime as Marvell and Broadcom Reshape AI Infrastructure