News
Feb 25, 2026
NewDecoded

Image by Multiverse Computing
Multiverse Computing, a leader in AI model compression, announced the release of HyperNova 60B 2602 on Hugging Face this week. The new model is a 50 percent compressed version of OpenAI's gpt-oss-120B, designed to make high-performance AI accessible to the global developer community. By cutting the memory footprint from 61GB to 32GB, the model can run on significantly less powerful hardware while preserving advanced reasoning capabilities.
The core of this breakthrough lies in CompactifAI, the company's proprietary technology that utilizes quantum-inspired mathematics to reorganize neural networks. This method identifies and preserves information-rich components, allowing the model to stay within a 2 to 3 percent margin of its original accuracy. Unlike standard pruning or quantization techniques, this approach ensures that intelligence is not sacrificed for size.
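Multiverse has not published CompactifAI's internals, but the general idea of reorganizing a network while keeping its information-rich components can be sketched with ordinary low-rank factorization via truncated SVD, a simpler relative of the quantum-inspired tensor-network methods the company describes. The example below is purely illustrative: the matrix, rank, and function names are assumptions, not the actual CompactifAI algorithm.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Factor a weight matrix W (m x n) into two thin matrices via truncated SVD.

    Storage drops from m*n to rank*(m + n) numbers; the product A @ B
    approximates W by keeping only the most information-rich directions.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
# Synthetic "weight matrix" with strong low-rank structure plus small noise,
# standing in for one layer of a large model.
W = rng.standard_normal((512, 16)) @ rng.standard_normal((16, 512))
W += 0.01 * rng.standard_normal((512, 512))

A, B = low_rank_compress(W, rank=16)
original = W.size            # numbers stored before compression
compressed = A.size + B.size # numbers stored after compression
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

In this toy case the factorization stores roughly 94 percent fewer numbers while reconstructing the matrix to within a fraction of a percent, which is the flavor of trade-off the article describes: large memory savings with accuracy held inside a small margin.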
Significant performance gains characterize this latest update, particularly in autonomous and agentic tasks. The model shows a fivefold improvement in agentic tool use on the Tau2-Bench and a doubling of performance in terminal-based coding benchmarks. These enhancements reflect a commitment to iterative improvement based on real-world developer feedback and usage patterns.
CEO Enrique Lizaso Olmos noted that compression is an ongoing journey of optimization rather than a single event. The company aims to empower developers to experiment and deploy efficient AI without requiring massive infrastructure investments. This strategy supports a broader goal of providing sovereign solutions that work across enterprise, research, and public sector environments.
Looking forward, the company plans to release more open-source models throughout 2026 to address various use cases. From large-scale enterprise systems to edge-level applications, the focus remains on eliminating the trade-offs between size and accuracy. Developers can now access the model weights and technical documentation directly on the Multiverse Computing Hugging Face page.
The release of HyperNova 60B 2602 signifies a pivotal shift in the AI industry toward the democratization of frontier-level intelligence. By halving the hardware requirements of OpenAI's gpt-oss-120B while maintaining its reasoning power, Multiverse Computing is effectively lowering the barrier for enterprise-grade autonomous agents. This move highlights a growing trend where optimization and iterative compression are becoming as critical as raw parameter count. As hardware constraints remain a bottleneck for many, such sovereign AI solutions enable smaller organizations to deploy advanced capabilities locally without the need for massive GPU clusters.