Multiverse Unveils HyperNova 60B: 50% Compression Model Drives AI Efficiency and European Sovereignty

February 25, 2026
  • Multiverse Computing released HyperNova 60B 2602, a 50%-compressed version of GPT-OSS-120B, freely available on Hugging Face to boost efficiency with minimal performance loss.

  • At about 32GB, HyperNova 60B matches or closely approaches the accuracy of the full model, with improved tool-calling and agentic coding features now accessible on Hugging Face.

  • The 60B-2602 model retains near-identical tool-calling capabilities to the larger 120B while operating at half the size, reducing memory from 61GB to 32GB.

  • The 2602 upgrade enhances tool calling and agentic coding for high-cost workloads that strain compute budgets.

  • Documentation, benchmarks, and integration guides are published on Hugging Face to help IT leaders evaluate performance, security, and fit before large-scale rollouts.

  • The company positions itself around sovereign AI and an expanding open-source model release strategy to reduce dependence on closed stacks and support enterprise experimentation.

  • The release aligns with Europe’s sovereign AI push, emphasizing efficiency, open access, and local experimentation to reduce dependence on U.S.-based providers.

  • Enterprise-friendly, locally runnable models are highlighted as preserving data residency and governance, in contrast with reliance on external hyperscale services.

  • Market impact includes enabling SMEs, edge deployments, regulated industries, and research institutions to access advanced AI, potentially accelerating adoption across multiple sectors.

  • Industry experts see compression as essential for sustainable AI, with the compression market projected to grow significantly as efficient models gain prominence.

  • Multiverse is reportedly pursuing a new funding round of roughly €500 million that could value the company above €1.5 billion, though formal ARR figures remain unconfirmed.

  • Multiverse has a global footprint with offices in Europe and North America, partnerships with Iberdrola, Bosch, and the Bank of Canada, and collaborations with Aragón’s regional government.
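The memory figures cited above follow from simple parameter-count arithmetic. A back-of-envelope sketch, not the vendor's method: the 4-bits-per-parameter figure below is an assumption chosen because it roughly reproduces the reported 61GB baseline for a ~120B-parameter model, and it counts weights only (no activations or KV cache).

```python
def model_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight-only memory footprint in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

# Assumed: ~120B parameters at ~4 bits each (weights only)
full = model_memory_gb(120e9, 4)   # ~60 GB, near the reported 61GB
compressed = full * 0.5            # a 50% compression halves the weight footprint, ~30 GB
print(f"full ~{full:.0f} GB, compressed ~{compressed:.0f} GB")
```

The small gap between this estimate and the published 32GB is expected: actual checkpoints include embeddings, norms, and metadata that compress differently from the bulk weights.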

Summary based on 6 sources

