After prolonged negotiations, complaints, and political jockeying, the EU AI Act is finally here.
Well, sort of.
Negotiators reached a provisional deal on the law regulating AI in Europe — the first major piece of AI legislation in the world — after a marathon 37-hour session in Brussels last week. The full text is yet to come, but those of us working in the European AI landscape must now urgently make sense of the law so that we are fully compliant when it takes full effect in two years’ time.
Risk-based approach
Here’s what we know for sure.
Many AI applications that sound like they come straight from an episode of “Black Mirror” are effectively outlawed. One can’t, for instance, spin up a biometric categoriser that sorts people based on race or sexual orientation. Nor can your company’s HR department use emotion recognition technology to rank which employees look saddest at the office. Companies violating such rules could face fines of 1.5 percent to 7 percent of global annual turnover.
These are the sort of restrictions that even the biggest tech optimist can get behind. Of course, AI has risks that need to be mitigated. So do the pharmaceutical and energy industries. As expected, the law takes a risk-based approach, with rules established on transparency, oversight, and cybersecurity for “high-risk” AI.
But for all the talk about risks, there has been far too little discussion about the rewards.
Opportunity lost
EU leaders have talked about how they want to foster a positive environment for tech innovation. But it’s mostly just been that — talk, not action. As Internal Market Commissioner Thierry Breton wrote on X after an agreement was reached, the act is “a launchpad for EU startups and researchers to lead the global AI race.”
Now the EU should put its money where its mouth is. It would be more of a launchpad if regulators had used this opportunity, for instance, to also announce a serious investment in AI research and development.
The rest of the world tends to make fun of us, thinking we would rather regulate than create. Announcing a big AI investment in tandem with a regulatory framework would have been a golden PR opportunity to silence those critics.
I’m still optimistic that Europe’s AI ecosystem will continue to flourish. Now that the rules are set, we can all start playing to win. But I’d still like to see more commitment from the EU itself.
Or, at the very least, an explanation of what support will be provided to nurture young startups, which will surely find it far more costly and challenging to navigate the EU AI Act’s bureaucracy than Big Tech, with its deep pockets and teams of lawyers.
Or more details on how AI solutions can move swiftly from the regulatory testing “sandbox” into the marketplace.
Pragmatic regulation
The fact is, Europe — with the eyes of regulators and tech leaders across the globe upon us — has an opportunity to show that we are not only leading the way on AI rules but also on bolstering the AI startup community.
It’s quite an easy story to tell, actually.
Europe is already positioned to lead when it comes to applying AI, thanks to our rich network of research institutions, deep pool of talent, and strong industry knowledge.
Some of the most exciting AI startups are hard at work in Europe. Money is already flowing in, and there is much more investment to come. But if we force startups to fill five of their first 10 roles with compliance staff, we send exactly the wrong signal. Living with and implementing this regulation has to be pragmatic.
Taking on Goliaths
It’s helpful to take a step back and think about how we got here. The EU AI Act was originally discussed before the emergence of ChatGPT, a lifetime ago in the age of AI.
Regulators decided to focus on the risks of an AI application, rather than policing the technology itself. (For many AI practitioners, this regulation isn’t a huge upheaval. Just like GDPR, it is mostly about documentation and transparency.)
After the rise of ChatGPT and other large language models, some lawmakers called for “foundation models” to be regulated as well.
These models, with varying degrees of openness, are the platforms upon which entrepreneurs can build future AI solutions that will change the world. They are also important to the European ecosystem desperate to see its own large language model makers take on the Microsofts and Googles of the world.
The push to regulate foundation models prompted pushback from some in the tech community, with countries that are home to European model makers, such as Germany and France, seeking to shield those models from stringent rules and reporting requirements.
Ultimately, regulators seem to have split the difference, placing guardrails on foundation models based on their level of computing power, but more or less exempting “open-source models.”
Regulating in real time
Whether you think the law was too severe or too permissive, the point is that the technology is moving so fast that it is almost impossible to regulate in real time. How the law is interpreted and implemented in practice will be the real test of its effectiveness and value.
There are still many questions that need clarifying. For instance, where exactly does liability fall for an AI application — with the maker of the foundation model it is built on, or with the application developer?
What happens when computing power surpasses today’s thresholds, as it inevitably will with the next generation of foundation models, and the law has to change to account for it?
How do we ensure that Europe stays competitive and doesn’t fall behind Silicon Valley, given the rapid developments in our industry? The question is all the more pressing because regulators in the US can study what we’ve done in Europe and tweak their own likely forthcoming rules to be even friendlier to startups than ours.
Head start, high quality
I believe that we in the European startup community must now be even more vocal to ensure this law does not become too burdensome for the smaller players in the AI field, the very players who are crucial to Europe winning the AI race.
We do have some initial advantages. Regulation on AI is coming across the globe, with lawmakers in the US and UK eyeing what we’re doing in the EU. As a first mover, Europe has a head start. That means “made in Europe” AI could come to mean “high quality” AI, essentially a safe home for investor money.
That’s why we must recognise that even high-risk categories can bring high rewards, and we must encourage innovation even when it seems scary or uncertain. The new risk classification should actually free up more investment in AI, because investors can now be sure of the regulatory risk level of what they are backing.
We need non-AI companies to invest in AI solutions, even high-risk ones, because the benefits are worth it. To that end, I would also like to see European lawmakers run a reassuring campaign explaining to worried companies that compliance for high-risk AI is feasible and worthy of investment.
If anything, the values instilled in Europe should mean that “high-risk” European AI applications are certified to the highest standards imaginable.
Made in Europe
If we really want “made in Europe” AI to mean something — to be a stamp of approval connoting the highest ethical and technical standards — then we must make Europe the friendliest home for the burgeoning AI startup community, most of which is working on challenges that are not high-risk.
That takes investment and a level playing field where we can compete. It means making a statement by committing serious euros to AI. In short, it takes more than regulation.
Nicole Büttner is a member of the board at Merantix, an AI investor and venture builder. She is also the founder and CEO of AI solutions provider Merantix Momentum.