Neurophos Secures $110M to Revolutionize AI with Energy-Efficient Photonic Processors
January 22, 2026
Rising data center energy costs, GPU shortages, constrained power availability, and the slowdown of Moore's Law are pushing AI-driven data centers toward alternative compute architectures, making photonic processing a potentially disruptive alternative to traditional GPUs.
Neurophos plans a datacenter-ready OPU module, a full software stack, early-access hardware, and expanded operations in San Francisco and Austin, while Microsoft is reportedly evaluating integration with Azure AI infrastructure.
The funding round was led by Gates Frontier, with participation from M12, Carbon Direct Capital, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others, underscoring strong backing from both hyperscalers and industrial investors.
The core technology uses metasurface modulators, derived from metamaterials research, to build miniaturized optical components that perform matrix-vector multiplications with light, aiming to cut energy consumption and boost speed.
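In an optical engine of this kind, the metasurface encodes a weight matrix while light encodes the input vector, so a single optical pass yields the full product. A minimal sketch of the operation being offloaded (pure Python; all names are illustrative, not Neurophos's actual interface):

```python
# The operation a metasurface-based optical core performs in one pass:
# y = W @ x, where W is encoded in the metasurface and x in the light field.
def optical_mvm(weights, x):
    """Emulate the matrix-vector product a photonic core computes optically."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in weights]

W = [[1.0, 0.5],
     [0.0, 2.0]]
x = [2.0, 4.0]
print(optical_mvm(W, x))  # [4.0, 8.0]
```

The appeal is that every multiply-accumulate in that loop happens "for free" as light propagates, rather than as a sequence of transistor switching events.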
Neurophos, a photonics startup based in Austin, has secured a $110 million Series A to commercialize optical processors (OPUs) built from metasurface modulators derived from metamaterials research, with the goal of dramatically reducing energy use in AI inference.
Potential applications span large language model inference, computer vision for autonomous systems and surveillance, scientific computing with heavy matrix ops, and edge AI where power is constrained.
Neurophos claims its technology is compatible with standard silicon foundries, potentially easing supply-chain integration and cost curves versus bespoke photonics approaches.
If successful, photonic compute could become a practical accelerator tier, signaling a shift from pure transistor scaling to physics-based computation for AI.
CEO Patrick Bowen emphasizes that shrinking optical components lets more of the computation stay in the optical domain before electronic conversion, easing energy bottlenecks and accelerating AI inference.
End-to-end performance will hinge on memory bandwidth, activation handling, model sparsity, and interconnects; the design envisions optics handling linear layers while electronics manage nonlinear activations and control.
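That division of labor can be sketched as a hybrid inference loop: linear layers dispatch to the optical core, while nonlinear activations run in electronics after readout (a schematic only; `optical_linear` is a hypothetical stand-in for an OPU call, not a real Neurophos API):

```python
def optical_linear(weights, x):
    # Stand-in for the OPU: matrix-vector product done in the optical domain.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def relu(v):
    # Nonlinear activation handled in electronics after optical readout.
    return [max(0.0, u) for u in v]

def hybrid_forward(layers, x):
    """Run each linear layer optically, each activation electronically."""
    for W in layers[:-1]:
        x = relu(optical_linear(W, x))
    return optical_linear(layers[-1], x)  # final layer, no activation

layers = [[[1.0, -1.0], [0.5, 0.5]],  # layer 1 weights
          [[1.0, 1.0]]]               # layer 2 weights
print(hybrid_forward(layers, [2.0, 1.0]))  # [2.5]
```

In such a design, the round trips between the optical and electronic domains at each activation are exactly where the memory-bandwidth and interconnect constraints mentioned above would bite.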
Summary based on 11 sources
Sources

SiliconANGLE • Jan 22, 2026
Chip startup Neurophos gets $110M to replace electrons with photons and accelerate AI compute
HPCwire • Jan 22, 2026
Neurophos Secures $110M Series A to Launch Exaflop-Scale Photonic AI Chips
