Quantum-inspired AI model compression company Multiverse Computing has raised a €189 million Series B round.
It has developed CompactifAI, a compression technology capable of reducing the size of LLMs by up to 95 per cent while maintaining model performance.
LLMs typically run on specialised, cloud-based infrastructure that drives up data centre costs. Traditional compression techniques — quantisation and pruning — aim to address these challenges, but the resulting models significantly underperform the original LLMs.
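For readers unfamiliar with the "traditional" techniques named above, the sketch below illustrates symmetric 8-bit post-training quantisation of a weight tensor, one of the standard baselines. This is not Multiverse's approach; it only shows the kind of size-versus-accuracy trade-off these methods make.

```python
import numpy as np

# Illustrative sketch of symmetric 8-bit post-training quantisation,
# one of the "traditional" compression techniques mentioned above.
# This is not Multiverse's method, just the baseline it improves on.

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus one per-tensor scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)             # 0.25: int8 is 4x smaller than float32
print(float(np.abs(w - w_hat).max()))  # rounding error, at most scale / 2
```

The 4x storage saving is exact; the cost is a rounding error on every weight, which is what degrades the accuracy of aggressively quantised models.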
With the development of CompactifAI, Multiverse discovered a new approach. CompactifAI models are highly compressed versions of leading open-source LLMs that retain the original accuracy, run 4x-12x faster, and yield a 50 to 80 per cent reduction in inference costs.
These compressed, affordable, and energy-efficient models can run on the cloud, on private data centres, or — in the case of ultra-compressed LLMs — directly on devices such as PCs, phones, cars, drones, and even a Raspberry Pi.
“The prevailing wisdom is that shrinking LLMs comes at a cost. Multiverse is changing that,” said Enrique Lizaso Olmos, Founder and CEO of Multiverse Computing.
“What started as a breakthrough in model compression quickly proved transformative — unlocking new efficiencies in AI deployment and earning rapid adoption for its ability to radically reduce the hardware requirements for running AI models.”
“With a unique syndicate of expert and strategic global investors on board and Bullhound Capital as lead investor, we can now further advance our laser-focused delivery of compressed AI models that offer outstanding performance with minimal infrastructure.”
CompactifAI was created using Tensor Networks, a quantum-inspired approach to simplifying neural networks.
Tensor Networks is a specialised field of study pioneered by Román Orús, Co-Founder and Chief Scientific Officer at Multiverse.
“For the first time in history, we are able to profile the inner workings of a neural network to eliminate billions of spurious correlations to truly optimise all sorts of AI models,” said Orús.
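Tensor networks generalise low-rank factorisation, so a truncated SVD of a weight matrix conveys the basic principle behind this kind of compression. The sketch below is illustrative only: CompactifAI's actual method is not public, and nothing here represents it.

```python
import numpy as np

# Illustrative sketch only: tensor networks generalise low-rank
# factorisation, so a truncated SVD conveys the basic principle
# of replacing a dense layer with a far smaller factorised form.

rng = np.random.default_rng(1)

# A weight matrix that is nearly low-rank, as trained layers often are.
a = rng.normal(size=(512, 32))
b = rng.normal(size=(32, 512))
w = a @ b + 0.01 * rng.normal(size=(512, 512))

# Keep only the top-32 singular triplets.
u, s, vt = np.linalg.svd(w, full_matrices=False)
rank = 32
w_approx = (u[:, :rank] * s[:rank]) @ vt[:rank]

orig_params = w.size
compressed_params = u[:, :rank].size + rank + vt[:rank].size
rel_err = np.linalg.norm(w - w_approx) / np.linalg.norm(w)

print(compressed_params / orig_params)  # ~0.125: roughly 8x fewer parameters
print(rel_err)                          # small reconstruction error
```

The point of the example is that when a layer's weights carry redundant correlations, most of its parameters can be discarded with little loss — the intuition behind "eliminating billions of spurious correlations".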
Compressed versions of top Llama, DeepSeek and Mistral models are available now, with additional models coming soon.
Bullhound Capital led the funding, with support from HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Toshiba, and Capital Riesgo de Euskadi - Grupo SPRI.
According to Per Roman, Co-founder and Managing Partner at Bullhound Capital, Multiverse’s CompactifAI introduces material changes to AI processing that address the global need for greater efficiency in AI, and its ingenuity is accelerating European sovereignty.
“Román Orús has convinced us that he and his team of engineers are developing truly world-class solutions in this highly complex and compute-intensive field. Enrique Lizaso is the perfect CEO for rapidly expanding the business in a global race for AI dominance. I am also pleased to see that so many high-profile investors, such as HP and Forgepoint, decided to join the round. We welcome their participation.”
Tuan Tran, President of Technology and Innovation at HP Inc., commented:
“At HP, we are dedicated to leading the future of work by providing solutions that drive business growth and enhance professional fulfilment."
The investment will accelerate the widespread adoption of LLMs by addressing the massive costs that currently prohibit their rollout.