The launch of DeepSeek in February sparked discussions about Europe's position in the AI race. While touted for its competitive pricing and efficiency, the LLM's open-source nature and storage of data in China have also raised serious security concerns, with Italy banning it and other countries restricting its use by government agencies and contractors.
Enter the OpenEuroLLM project
February also saw the launch of the OpenEuroLLM project, which aims to improve Europe's competitiveness and digital sovereignty while lowering thresholds for European AI product development and refinement.
It's timely, as many people have wondered what companies in Europe could compete with DeepSeek. Mistral AI (France) and Aleph Alpha (Germany) were most often mentioned. However, the latter has shifted from foundational LLMs to providing AI infrastructure and platforms for enterprise and government clients.
OpenEuroLLM is a consortium of 20 leading European research institutions, companies and EuroHPC centres, coordinated by Jan Hajič (Charles University, Czechia) and co-led by Peter Sarlin (AMD Silo AI, Finland). The consortium is building a family of performant, multilingual, large language foundation models for commercial, industrial and public services.
Significantly, the models will be developed within Europe's robust regulatory framework, ensuring alignment with European values, and in cooperation with open-source and open-science communities, so that the models, software, data, and evaluation will be fully open and can be fine-tuned and instruction-tuned for specific industry and public sector needs.
However, there's another argument about Europe's standing in the AI race. Can it close the AI innovation gap by investing in smaller, specialised AI models and applications?
SLMs are another way forward
Anita Schjøll Abildgaard, CEO of EU-funded Iris.ai, argues that Europe's AI future depends on embracing small, domain-specific language models (SLMs) and open-source collaboration.
With Europe's data centre power demand set to triple by 2030, it's compelling to envision a different approach to AI — one that doesn't rely on ever-larger models consuming vast energy resources.
I spoke to Abildgaard and Iris.ai CTO and co-founder Victor Botev to learn more.
Abildgaard contends:
"While there's been some movement—like the €200 billion InvestAI plan, Open Euro LLM, and other initiatives—it still doesn't amount to a major push.
The momentum is real, but it comes with big caveats: how exactly will the funding be allocated, and over what timeframe?"
Small language models bridge the gap to real-world AI adoption
SLMs still need foundational models to generate the high-quality data necessary to train them. Thus, for Europe to be competitive, it needs a presence in both foundational and small models.
However, SLMs are cheaper, can be tailored to specific business needs, and are more practical than large general-purpose models.
Further, according to Botev, most business use cases don't need the full power of massive LLMs.
"If you can distill just the knowledge you need into a smaller model, it's more efficient and cost-effective. That's been our focus — using small models to address real workflows."
SLMs work anywhere you need fast, efficient decision-making — for example, in agent-based workflows where several small models collaborate. They also work well in domains like chemistry and healthcare.
According to Botev, "Training large models to understand DNA or chemical structures risks "catastrophic forgetting" — where they lose existing capabilities. Small models let us specialise without that risk."
Further, SLMs provide mammoth energy savings. Botev shared:
"A 1B parameter model needs far less compute than a 400B one — potentially 60,000x fewer resources. Small models can run on older GPUs with lower power use. They're not only cheaper but greener too, which aligns with Europe's sustainability goals."
Transparency and collaboration as European advantages
According to Botev, open-source collaboration is exploding thanks to projects like DeepSeek:
"They've open-sourced distillation methods, which let people build small models from large ones. There are now over 10,000 distilled models on Hugging Face.
Reinforcement learning is also proving effective — even rivalling supervised fine-tuning. That's huge for customisation.
However, in Europe we have expertise in safety, guardrails, and efficient systems — we should double down on that."
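Many of those distilled checkpoints can be pulled down and run locally. As a rough illustration, the snippet below loads one publicly hosted distilled model with the Hugging Face transformers library; the checkpoint name is used purely as an example, not as an endorsement of any particular model.

```python
# Example of running a distilled checkpoint locally with the transformers library.
# The model ID below is one example of the distilled models mentioned above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example distilled checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise why small language models can be more energy efficient."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```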
Abildgaard suggested that European startups could take inspiration from open-access publishing.
"If you get EU funding, maybe you should be required to open-source some of your work, especially foundational models. It would drive collaboration and transparency — areas where Europe leads."
In terms of collaboration, Iris.ai has partnered with Sigma2 AS, which provides the national e-infrastructure for computational science in Norway, offering high-performance computing (supercomputing) and large-scale data storage services for research and education.
The company uses Sigma2 to train and domain-adapt small models (1–9B parameters), and to evaluate system components.
According to Botev, evaluation is often overlooked, "but it's compute-intensive and critical for scaling systems with multiple agents and retrieval layers."
Iris.ai recently launched a new business line: a powerful RAG (retrieval-augmented generation) system, a decade in the works. According to Botev, the company's RAG system is agent-based and packed with small models.
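For readers unfamiliar with the pattern, the toy sketch below shows the general shape of a RAG loop: retrieve a few relevant documents, then hand them to a language model as context. It is illustrative only; the keyword-overlap retriever and the `small_model_generate` placeholder stand in for the embedding-based retrieval and locally hosted SLMs a production system like Iris.ai's would use.

```python
# Toy retrieval-augmented generation (RAG) loop, illustrating the general pattern.
# Retrieval here is a naive keyword-overlap scorer; a real system would use an
# embedding model and a vector index, and small_model_generate is a placeholder
# for a call to a locally hosted small language model.

DOCUMENTS = [
    "Small language models can run on older GPUs with lower power use.",
    "Catastrophic forgetting is the loss of existing capabilities during fine-tuning.",
    "EuroHPC centres provide large-scale compute for European research projects.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def small_model_generate(prompt: str) -> str:
    """Placeholder for a call to a small, domain-adapted language model."""
    return f"[SLM answer based on a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Assemble retrieved context into a prompt and ask the small model."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return small_model_generate(prompt)

print(answer("Why are small language models energy efficient?"))
```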
We need more SLMs
That said, SLMs (or, indeed, local competitors to LLMs) have been slow to emerge in Europe.
One stand-out company is Malted AI (Scotland), which distils the output of large models into smaller models. Its technology allows enterprises to apply SLMs that solve domain-specific problems with 10-100x cost savings. Instead of doing thousands of tasks moderately well, Malted AI's SLMs do one task nearly perfectly.
Rather than chasing scale alone, Europe could lead by championing smart, open, and sustainable AI. That means investing not only in foundational models but also in the ecosystems that enable agile development, collaboration, and fine-tuning. Ultimately, the future of European AI might not be about building the biggest model — but the smartest, most specialised one.