It's not every day that someone leaves one of the world's most successful fabless semiconductor companies – ARM – for a startup, but Noel Hurley found one solving a problem in edge AI that had plagued the sector for decades.
When he first came across Literal Labs in the summer of 2023, it was spinning out of Newcastle University in collaboration with the Centre for AI Research in Norway. The team had been working for five years on logic-based AI and what are called Tsetlin machines – more on that shortly.
Having spent 30 years in the processor and computer science space, Hurley was familiar with the challenges around AI adoption — especially in industrial markets. He admits, "promises were made about edge AI, but progress was limited."
"Neural networks required expensive new hardware, consumed a lot of energy, and were slow to deploy."
When he saw Literal Labs' research results, he had an "aha!" moment. Here was a technology that solved many of those problems — and could run on existing hardware.
"That meant we could engage customers early and deploy quickly. It was clear to me this wasn't just interesting research; it was the basis for a company," he shared.
I sat down with Hurley to learn about Literal Labs.
What exactly is a Tsetlin machine?
Today's neural networks are built around multiplication – multiplying numbers together. Multiplication is an expensive operation on a chip: it requires large circuits and burns a lot of energy, which is why AI power consumption has skyrocketed.
"Many chips now advertise 'neural network accelerators,' which are essentially just large arrays of multiplication circuits," contends Hurley, explaining that a Tsetlin machine works differently.
"Instead of heavy mathematics, it uses propositional logic – 'if/then' statements – combined through a voting algorithm.
During training, the model decides whether to include, exclude, or ignore each of these statements. The result is a dense network of logic that can be deployed onto silicon far more efficiently."
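Hurley's description can be sketched in a few lines of code. This is an illustrative toy, not Literal Labs' implementation: the clauses, features, and thresholds below are made up, but the shape – conjunctions of if/then literals combined through voting – follows the idea he outlines.

```python
# Toy sketch of logic-based (Tsetlin-machine-style) inference.
# Clauses and inputs are illustrative, not a real trained model.

def evaluate_clause(included_literals, x):
    """A clause is a conjunction of literals: each entry is
    (feature_index, expected_value). It fires (returns 1) only if
    every included literal matches the input."""
    return int(all(x[i] == v for i, v in included_literals))

def classify(x, positive_clauses, negative_clauses):
    """Clauses vote: positive clauses add evidence for the class,
    negative clauses subtract. The sign of the sum decides."""
    votes = sum(evaluate_clause(c, x) for c in positive_clauses)
    votes -= sum(evaluate_clause(c, x) for c in negative_clauses)
    return 1 if votes >= 0 else 0

# Hand-written clauses; training would instead learn which literals
# to include or exclude in each clause.
x = [1, 0, 1]                           # three boolean features
positive = [[(0, 1), (2, 1)]]           # "if f0 AND f2, vote +1"
negative = [[(1, 1)]]                   # "if f1, vote -1"
print(classify(x, positive, negative))  # -> 1
```

Note there is no multiplication anywhere: inference reduces to equality checks and an integer tally.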
Low-cost meets low power
Literal Labs' AI models are designed to run on very low-cost, low-power hardware – specifically, devices priced under $5.
These devices are typically modest microcontrollers or system-on-chip units rather than sophisticated, high-performance computing platforms. The key point is that no GPU or specialised accelerators (like TPUs or custom AI chips) are required for inference.
Hurley attributes Literal Labs' lower-energy results to logic-based circuits, which are more energy-efficient than multiplication circuits.
"Our approach is about matching algorithms to the strengths of existing silicon, rather than forcing silicon to handle operations it wasn't optimised for."
Most of the operations come down to lookups or comparisons, which microprocessors already handle extremely efficiently, according to Hurley.
By replacing multiplication-heavy circuits with logic-based circuits, you can achieve similar outcomes at a fraction of the cost and energy.
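To make the "lookups or comparisons" point concrete: if the boolean features are packed into a machine word, a whole clause can be evaluated with a handful of bitwise operations, exactly the kind of work an ordinary microcontroller ALU handles natively. The masks below are illustrative, not from any real model.

```python
# Hedged sketch: a clause over packed boolean features evaluated with
# one XOR, one AND, and one compare - no multiplications.
# Bit layout and masks are made up for illustration.

def clause_fires(x_bits, include_mask, expect_bits):
    """x_bits packs boolean features into one integer. The clause fires
    when every bit selected by include_mask equals the corresponding
    bit of expect_bits."""
    return (x_bits ^ expect_bits) & include_mask == 0

x_bits = 0b101  # features: f2=1, f1=0, f0=1
print(clause_fires(x_bits, include_mask=0b101, expect_bits=0b101))  # f0 AND f2 -> True
print(clause_fires(x_bits, include_mask=0b010, expect_bits=0b010))  # f1 -> False
```

On a 32-bit microcontroller, one such check covers 32 literals per instruction sequence, which is where the efficiency claim comes from.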
54× faster, 52× greener
If this all sounds complicated, the results speak for themselves under MLPerf benchmarking standards, which provide fair, representative, and repeatable ways to measure how well different hardware and software systems run AI workloads.
Literal Labs' benchmarks show dramatic results: 54 times faster performance and 52 times less energy use compared to equivalent neural networks.
According to Hurley, when he joined in October 2023, the team were already seeing speedups ranging from 5x to 250x over traditional algorithms.
"Last year, when we published our MLPerf benchmarks, we confirmed 54x faster performance with 52x lower energy consumption.
What was even more encouraging was that the datasets used in MLPerf were significantly larger and more complex — up to 400 gigabytes compared to the one-megabyte sets we tested earlier.
Despite this increase in complexity, the gains held up. That demonstrated the robustness of our approach."
Further, logic-based AI is naturally explainable, ensuring accountability for the model's decision-making.
Literal Labs shows a cheaper path forward in edge AI
Literal Labs sees immediate traction in industrial and edge AI, where its approach creates clear commercial value by reducing compute complexity, inference costs, and bill of materials:
"Think about battery-powered devices, safety-critical products, or heavily regulated markets. In these environments, explainability, energy efficiency, and compute constraints all matter," Hurley explains.
Historically, attempts to apply AI in these markets either failed or were severely limited. Companies couldn't afford to replace equipment already in the field, so they tried to bolt on connectivity, send data to the cloud, and process it there. That added costs, dependencies, and supply chain complexity without delivering a clear bottom-line return.
"By contrast, what excites customers about our approach is the ability to deploy AI directly onto existing devices—without expensive upgrades. We can bring intelligence to the edge in places that were previously off-limits," he shared.
Literal Labs empowers engineers to train their own models
Part of Literal Labs' vision is to let customers train their own models: its commercial product is a toolchain for training models on their own datasets. The target user is a competent software engineer, not necessarily a machine learning specialist.
According to Hurley, the tool is highly automated:
"Typically, you don't just train a single model—you train hundreds, then prune and select the best. We've built automation into that process.
Customers can run it on-premise, in their private cloud, or directly at the edge. This has several advantages: it addresses data sensitivity concerns for customers unwilling to send datasets off-site, and it makes adoption easier by fitting into their existing infrastructure."
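The "train hundreds, then prune and select" loop Hurley describes could look something like the sketch below. Everything here is a hypothetical stand-in – the training function, the accuracy model, and the memory budget are invented for illustration and are not Literal Labs' toolchain API.

```python
# Hypothetical sketch of an automated train-many / prune / select loop.
# train_candidate is a toy stand-in for real model training.
import random

random.seed(0)

def train_candidate(n_clauses, dataset):
    """Stand-in for training one logic-based model.
    Returns (accuracy, memory_kb); a real run would fit clauses to data."""
    accuracy = min(0.99, 0.70 + 0.02 * n_clauses + random.uniform(-0.05, 0.05))
    memory_kb = 2 * n_clauses          # more clauses -> bigger model
    return accuracy, memory_kb

def search(dataset, n_candidates=100, memory_budget_kb=32):
    results = []
    for _ in range(n_candidates):
        n_clauses = random.randint(1, 30)        # sweep a hyperparameter
        acc, mem = train_candidate(n_clauses, dataset)
        if mem <= memory_budget_kb:              # prune: must fit the device
            results.append((acc, mem, n_clauses))
    return max(results)                          # select: best accuracy that fits

best = search(dataset=None)
print(f"accuracy={best[0]:.2f}, memory={best[1]}KB, clauses={best[2]}")
```

The pruning step is where the edge constraint bites: candidates that would not fit in the target device's memory are discarded before selection, mirroring the memory constraint Hurley mentions later.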
Company CTO Leon Fedden previously led the deep learning platform at AstraZeneca. He brings expertise in combining classic AI techniques with automation, helping ensure the toolchain is robust and scalable.
The company is collaborating with utilities to develop smart wastewater systems, where sensors can identify what constitutes "normal" and "abnormal" flows, triggering early warnings. The same applies to electricity networks or other utility grids with vast numbers of remote sensors.
Another key area is machine health: predicting wear and tear and scheduling maintenance before a machine fails. That's hugely valuable in industrial settings.
Right now, the company is focusing on time-series data – such as vibration sensors or audio – and on tabular data. These domains are full of opportunities for better forecasting and process decisions.
"We're building capability for image data as well, but our initial focus is time-series and tabular," explains Hurley.
By avoiding costly hardware swaps, Literal Labs eases Edge AI adoption
Many edge AI startups have struggled to commercialise because deploying AI often requires changing hardware, and companies didn't want the expense or disruption of installing new equipment. Hurley explained:
"Our advantage is that we don't require special accelerators—just a standard microprocessor, which every industrial IoT device already has.
The only fundamental constraint is whether there's enough memory available for an additional function. That's a big difference from approaches that depend on entirely new hardware."
Currently, Literal Labs is running five proof-of-concept projects with customers and aims to launch its product in the second half of this year.
Hurley admits that AI is a noisy space for startups:
"A lot of promises get made. But we see ourselves as a disruptor. Our strategy is to stay focused: find strong problem areas, work closely with customers, and deliver measurable value. That was something I learned early on at ARM.
Robin Saxby, ARM's first CEO, always stressed focus.
I joined as employee number 40-something, and that lesson still applies today."
Literal Labs' focus this year is on execution: expanding the team, delivering proof-of-concepts, and preparing for its product launch. From there, it'll broaden its data capabilities beyond time-series and tabular, and continue building out customer-facing tools.
In the longer term, the vision is to make logic-based AI a mainstream alternative to neural networks, especially in energy- and compute-constrained environments.