Multiverse Computing Releases Open-Source HyperNova 60B 2602 for Enhanced Agentic AI Workflows

Multiverse Computing has launched HyperNova 60B 2602 on Hugging Face, providing a highly efficient, 50 percent compressed model optimized for tool calling.

Decoded

Published Feb 25, 2026

3 min read

High-Performance Compression for Global Developers

Multiverse Computing, a leader in AI model compression, announced the release of HyperNova 60B 2602 on Hugging Face this week. The new iteration is a 50 percent compressed version of OpenAI's gpt-oss-120B, designed to make high-performance AI accessible to the global developer community. By shrinking the memory footprint from 61GB to 32GB, the model can now run on significantly more modest infrastructure while preserving advanced reasoning capabilities.
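As a quick sanity check on the figures above (illustrative arithmetic only, not part of the announcement), the stated memory savings work out to just under half:

```python
# Quick check of the stated figures: a 61 GB footprint compressed down to 32 GB.
original_gb = 61
compressed_gb = 32
reduction = 1 - compressed_gb / original_gb
print(f"memory footprint reduced by {reduction:.1%}")  # about 47.5%, i.e. roughly half
```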

Quantum-Inspired Efficiency with CompactifAI

The core of this breakthrough lies in CompactifAI, the company's proprietary technology that utilizes quantum-inspired mathematics to reorganize neural networks. This method identifies and preserves information-rich components, allowing the model to stay within a 2 to 3 percent margin of its original accuracy. Unlike standard pruning or quantization techniques, this approach ensures that intelligence is not sacrificed for size.
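CompactifAI itself is proprietary, so its exact method is not public. A loose, well-known analogue to tensor-network-style compression is low-rank factorization, sketched here with NumPy; the matrix size and truncation rank are arbitrary illustrations, not HyperNova's actual settings:

```python
import numpy as np

# Illustrative sketch only: replace a dense weight matrix with two thin
# factors via truncated SVD, keeping the most information-rich components.
# (Real model weights have decaying singular-value spectra, which is what
# makes this kind of truncation cheap in accuracy terms.)
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))          # stand-in for a dense layer's weights

rank = 64                                    # assumed truncation rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]                   # 512 x 64 factor
B = Vt[:rank, :]                             # 64 x 512 factor

original = W.size                            # 262,144 parameters
compressed = A.size + B.size                 # 65,536 parameters
print(f"factorized layer stores {compressed / original:.0%} of the original parameters")
```

The layer is then applied as `x @ A @ B` instead of `x @ W`, trading a small reconstruction error for a much smaller parameter count.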

Benchmark Success and Agentic Capabilities

This latest update delivers significant performance gains, particularly in autonomous and agentic tasks: the model shows a fivefold improvement in agentic tool use on Tau2-Bench and a doubling of performance on terminal-based coding benchmarks. These enhancements reflect a commitment to iterative improvement driven by real-world developer feedback and usage patterns.
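For context on what "agentic tool use" involves, here is a minimal sketch of the OpenAI-style tool-calling loop that such benchmarks exercise. The tool schema and dispatcher below are hypothetical examples, not HyperNova's actual interface:

```python
import json

# Hypothetical tool definition in the widely used OpenAI-style JSON schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local handler function."""
    handlers = {"get_weather": lambda city: f"Sunny in {city}"}
    args = json.loads(tool_call["arguments"])
    return handlers[tool_call["name"]](**args)

# A model optimized for tool calling emits structured calls like this one,
# which the agent runtime executes and feeds back into the conversation.
result = dispatch({"name": "get_weather", "arguments": '{"city": "Bilbao"}'})
print(result)
```

Benchmarks such as Tau2-Bench score how reliably a model chooses the right tool and produces well-formed arguments across multi-step tasks.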

Vision for Sovereign AI

CEO Enrique Lizaso Olmos noted that compression is an ongoing journey of optimization rather than a single event. The company aims to empower developers to experiment and deploy efficient AI without requiring massive infrastructure investments. This strategy supports a broader goal of providing sovereign solutions that work across enterprise, research, and public sector environments.

Access and Future Releases

Looking forward, the company plans to release more open-source models throughout 2026 to address various use cases. From large-scale enterprise systems to edge-level applications, the focus remains on eliminating the trade-offs between size and accuracy. Developers can now access the model weights and technical documentation directly on the Multiverse Computing Hugging Face page.

Decoded Take

The release of HyperNova 60B 2602 signifies a pivotal shift in the AI industry toward the democratization of frontier-level intelligence. By halving the hardware requirements of OpenAI's gpt-oss-120B while maintaining its reasoning power, Multiverse Computing is effectively lowering the barrier for enterprise-grade autonomous agents. This move highlights a growing trend where optimization and iterative compression are becoming as critical as raw parameter count. As hardware constraints remain a bottleneck for many, such sovereign AI solutions enable smaller organizations to deploy advanced capabilities locally without the need for massive GPU clusters.
