NobodyWho raises €2M to challenge Big Tech’s cloud AI with SLMs for local devices

NobodyWho’s SLM tech promises privacy, efficiency, and climate-aligned AI that runs where the data lives.

A David vs. Goliath shift is emerging in AI, as a small Nordic team challenges Big Tech’s cloud-LLM dominance by bringing Small Language Models (SLMs) directly to users’ devices.

Copenhagen-based open-source startup NobodyWho has raised €2 million in pre-seed funding to accelerate Europe’s ability to compete in global AI by championing Small Language Models as a cost-efficient, data-secure, and climate-aligned alternative to today’s massive cloud LLMs.

I spoke to founder and CEO Cecilie Waagner Falkenstrøm to learn all about it.

NobodyWho was founded by award-winning entrepreneur and artist Cecilie Waagner Falkenstrøm, whose pioneering work with interactive AI dates back to 2016. Together with co-founder and CTO Asbjørn Olling and a team of software engineers, she has spent nearly a decade advancing local-AI technologies — from UN-commissioned projects to a 2021 edge-AI experiment aboard the International Space Station.


Today’s cloud-based LLMs are controlled by a handful of non-European tech giants and require massive computational resources, constant internet access, and the transfer of vast amounts of data to third-party servers. This creates high costs, lock-in, and a structural loss of European data security. 


NobodyWho takes a fundamentally different approach. Its engine enables Small Language Models (SLMs) to run locally on laptops and mobile phones, so organisations and individuals keep full control over their data. With device-first architecture, no data needs to leave the device, enabling true data sovereignty and privacy by design.

Waagner Falkenstrøm explains:


“These models are still large by most standards, but they’re much smaller than systems like ChatGPT. They’re comparable to earlier generations of large models — and they’re more than capable for many real-world use cases.”

Local inference as a security and privacy advantage

Running models locally has immediate privacy and security benefits. From a security standpoint, it also creates a more resilient architecture.

Rather than relying on a single, centralised cloud server that can be targeted, computation is distributed across thousands—or even millions—of devices. Further, local inference shifts the cost burden away from cloud infrastructure entirely.

Users bring their own hardware, meaning the system can scale without increasing inference costs. Whether an application serves ten users or ten million, there is no escalating cloud bill. This makes advanced AI accessible to organisations that would otherwise be priced out, including NGOs, public-sector bodies, and early-stage startups.
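That scaling argument lends itself to a back-of-the-envelope sketch. The per-token price and usage figures below are illustrative assumptions, not NobodyWho’s or any vendor’s actual numbers:

```python
# Back-of-the-envelope comparison of cloud vs. device-local inference cost.
# All figures are illustrative assumptions, not real vendor pricing.

CLOUD_PRICE_PER_1K_TOKENS = 0.002   # assumed $ per 1,000 tokens for a hosted API
TOKENS_PER_USER_PER_MONTH = 50_000  # assumed average monthly usage per user

def monthly_cloud_cost(users: int) -> float:
    """Cloud inference bills grow linearly with the user base."""
    return users * TOKENS_PER_USER_PER_MONTH / 1_000 * CLOUD_PRICE_PER_1K_TOKENS

def monthly_local_inference_cost(users: int) -> float:
    """Device-local inference: users supply the hardware, so the provider's
    marginal inference cost does not grow with the user base."""
    return 0.0

for users in (10, 10_000, 10_000_000):
    print(f"{users:>10} users: cloud ${monthly_cloud_cost(users):>12,.2f}, "
          f"local ${monthly_local_inference_cost(users):.2f}")
```

Under these assumed figures, ten million users would imply a seven-figure monthly cloud bill, while the provider’s marginal inference cost stays flat when computation runs on users’ own devices.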

A 500x reduction in AI’s carbon footprint

NobodyWho’s local-first SLM architecture dramatically reduces AI’s carbon footprint by shrinking the models and moving them closer to where they are used.

Early benchmarks show up to 100x lower training footprint and up to 500x lower inference footprint. So this approach isn’t just cheaper and faster — it’s also dramatically more sustainable.
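Taken at face value, those ratios apply as a straight division. A minimal sketch, where the baseline per-query figure is a placeholder assumption and only the 100x/500x factors come from the reported benchmarks:

```python
# Applying the claimed reduction factors to an assumed cloud baseline.
# Only the factors (100x training, 500x inference) come from the article;
# the baseline gram figure below is a placeholder assumption.

TRAINING_REDUCTION = 100    # reported "up to" factor for training footprint
INFERENCE_REDUCTION = 500   # reported "up to" factor for inference footprint

def reduced_footprint(cloud_footprint: float, factor: int) -> float:
    """Footprint after applying an 'up to' reduction factor."""
    return cloud_footprint / factor

assumed_cloud_grams_per_query = 4.0  # placeholder, not a measured value
print(reduced_footprint(assumed_cloud_grams_per_query, INFERENCE_REDUCTION))
```

At inference time, an assumed 4 g CO2e cloud query would drop to roughly 0.008 g under the claimed 500x factor.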

Open source by default

Everything core to NobodyWho is open source — from its inference engine and inference libraries to its developer integrations — and that will remain the case. Rather than building proprietary language models, NobodyWho focuses on the infrastructure layer that makes existing open-source models usable in real-world products. Its engine enables more than 10,000 open-source language models to run efficiently across devices and operating systems.

“Our belief is simple: the models already exist,” says founder and CEO Cecilie Waagner Falkenstrøm.

“The real bottleneck is making them practical to deploy — especially for developers who don’t have machine-learning expertise.”

Most developers, she explains, aren’t ML specialists. NobodyWho’s goal is to make running a local language model as straightforward as integrating any other software dependency.

“An app developer should be able to run a local model with two lines of code,” she says.

“You shouldn’t need a PhD in machine learning to ship AI.”

To achieve this, NobodyWho integrates directly with major developer frameworks.

The company recently launched Python support and is expanding into additional ecosystems, allowing developers to drop NobodyWho into existing projects without deep ML knowledge or custom infrastructure work.

The company operates on an open-core business model. While all core components remain open source, NobodyWho monetises fine-tuning services — an area where compute requirements quickly become expensive and operationally complex for teams to manage alone.

“Companies could fine-tune models on-prem,” Waagner Falkenstrøm explains.

“But that means servers, engineering time, and ongoing maintenance. Instead, they can fine-tune models using our engine, pay for the compute, and we take a small cut. It’s still significantly cheaper and simpler than doing it themselves.”

Once a model is fine-tuned, it can be deployed to millions of end users with no additional inference cost, a key advantage of running models locally rather than in the cloud.

Small language models, running where the data lives

I was curious what the trade-offs are of using SLMs instead of LLMs.

“Historically, making an API call to a cloud-based model was easy, and running models locally was hard. We’ve solved that problem,” shared Waagner Falkenstrøm.

“With NobodyWho, the traditional complexity gap of local inference is effectively removed.”

There are still some use cases where very large models are necessary — extremely broad or complex reasoning tasks. Those models won’t disappear. But for most real-world business applications, such as chatbots, HR assistants, customer support, and domain-specific tools, Small Language Models are more than sufficient, especially when fine-tuned. “Fine-tuning smaller models is also easier,” explained Waagner Falkenstrøm.

“You need less data and less compute, and you get more controllable behaviour. Most companies operate within specific contexts, and small models excel there.”

NobodyWho uses the European Union Public Licence (EUPL) 1.2, which explicitly allows both individuals and companies to build commercial products on top of its code — a deliberate choice aimed at driving real-world adoption, with cross-platform support across mainstream operating systems and development frameworks included.

“If you want genuine uptake, commercial use has to be allowed,” says Waagner Falkenstrøm.

“Otherwise you don’t get an ecosystem — you get a demo.”

Over 5,000 devs building

The ecosystem is already forming. NobodyWho now has more than 5,000 developers building with the platform via GitHub, alongside an active Discord community where contributors discuss use cases, share feedback, and help shape the roadmap.

“The open-source aspect is critical,” Waagner Falkenstrøm adds. “It’s what allows a real community to emerge — not just users, but contributors.”

With the platform well beyond MVP, the company’s focus has shifted firmly to scale. “We’re past the experimentation phase,” she says.

“Now it’s about expanding framework support and enabling more developers to build production-grade applications.”

She believes the next leap forward in AI will come from making models smaller, more local, and more human-centric. 

“The future of AI won’t be won by size, but by decentralised models that anyone can run on their own devices.”

Investors view the rise of local, energy-efficient AI as a major strategic opportunity for Europe, especially as demand grows for privacy-compliant and cost-effective alternatives to cloud models.

The round is backed by PSV Tech, The Footprint Firm, and Norrsken Evolve.

“I’ve known Cecilie for nearly a decade and have seen first-hand how she consistently turns bold ideas into real, working technology,” says Christel Piron, co-founder and General Partner at PSV Tech:

“Backing NobodyWho was a no-brainer for us: this is an exceptional team building critical European AI infrastructure that is privacy-protecting, energy-efficient, and accessible to developers and companies everywhere.”

Sofie Käll, CIO at The Footprint Firm, shared:

“NobodyWho is pioneering the infrastructure that makes these ultra-efficient models truly plug-and-play for developers. 

“This is a transformative climate-tech opportunity in one of the fastest-growing emissions categories, and we’re excited to support a team capable of moving the industry toward more responsible AI.”

Waagner Falkenstrøm contends that no matter what we do, Europe will not outcompete the US or China in the “bigger is better” game. 

“The compute, capital, and hyperscale infrastructure simply aren’t comparable.

“But from our experience, we knew that smaller models are genuinely powerful in many fields. That creates an opportunity for Europe to compete differently.”

At the same time, there’s a strong values-based dimension to what NobodyWho is doing. She asserts:

“Coming from the EU and the Nordics, we care deeply about data security, GDPR compliance, sustainability, and data sovereignty. NobodyWho is designed to reflect those values.”

“Technology is power”

Waagner Falkenstrøm asserts that Europe needs to believe in itself, pointing out that it has some of the best education systems, software engineers, and research institutions in the world.

“We don’t need to copy the US — we need to build AI that reflects European strengths and values. Further, technology is power.

“If we care about privacy, sovereignty, sustainability, and democratic control, those values must be embedded in the technology itself. That’s what we’re trying to do with NobodyWho: decentralised, open, privacy-preserving AI that anyone can build on.”

Waagner Falkenstrøm sees NobodyWho as part of the first real wave of companies building infrastructure specifically for Small Language Models.

“A year ago, SLMs weren’t widely discussed. Today, developers and investors understand the category. The models are improving rapidly, and the tooling has matured.

“Big tech companies will enter this space — but they’ll optimise for their own ecosystems. Apple will build for Apple. Microsoft will build for Microsoft. We’re platform-agnostic. That creates a meaningful opportunity.”
