GUEST AUTHOR

The stalled EU AI Act: Implications for mental health and equality

Following nearly 24 hours of negotiations, EU lawmakers have paused today's debate. Aligned AI co-founder and CEO Rebecca Gorman makes the case for urgency.

Finalisation of the EU AI Act has stalled over the questions of whether and how to regulate foundation models and biometric uses of AI.

Meanwhile, the engagement algorithms behind content platforms are causing depression and suicide among teenagers, accidental deaths among young children, and the radicalisation of adults.

Qualified and competent women, minorities, and people who don’t like anime are being denied jobs on the basis of their gender, their minority status, or their dislike of anime.

Prisoners are being denied parole based on stereotypes rather than merit.

Content sites are sharing opportunities and information based on stereotypical characteristics rather than merit and usefulness.

Smart regulation of foundation models and biometric AI is good. Delaying regulation of other types of high-risk and dangerous AI systems until we come to an agreement is not.

Foundation models can cause the same types of risks as those I’ve listed above, and add some new ones. They also introduce, for the first time, the ability to mitigate many of the above risks with an automated system. Have your content-serving algorithm ask a foundation model whether showing Tommy this piece of content after the previous ten he’s viewed will worsen his mental health, and for pennies, you’ve saved thousands of young lives.
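For illustration, here is a minimal sketch of the kind of check described above: a content-serving pipeline that asks a foundation model whether a candidate item, given the user’s recent viewing history, risks worsening their mental health. Every name here is a placeholder (the `query_model` callable stands in for whatever foundation-model API a platform actually uses); this is an assumption-laden sketch, not any platform’s implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ContentItem:
    item_id: str
    summary: str  # short text description of the content


def build_safety_prompt(candidate: ContentItem, recent: List[ContentItem]) -> str:
    """Frame the question: given the last ten items viewed, would showing
    this candidate item risk worsening the user's mental health?"""
    history = "\n".join(f"- {item.summary}" for item in recent[-10:])
    return (
        "A user has just viewed the following pieces of content:\n"
        f"{history}\n\n"
        f"Candidate next item: {candidate.summary}\n"
        "Answer YES or NO: would showing this item risk worsening "
        "the user's mental health?"
    )


def should_serve(candidate: ContentItem,
                 recent: List[ContentItem],
                 query_model: Callable[[str], str]) -> bool:
    """Serve the candidate only if the foundation model does not flag it."""
    answer = query_model(build_safety_prompt(candidate, recent))
    return not answer.strip().upper().startswith("YES")


if __name__ == "__main__":
    # Stub model for demonstration; a real system would call a hosted
    # foundation-model API here instead.
    stub_model = lambda prompt: "NO"
    history = [ContentItem(str(i), f"clip {i}") for i in range(10)]
    candidate = ContentItem("11", "a clip promoting extreme dieting")
    print(should_serve(candidate, history, stub_model))
```

The point of the sketch is the cheapness of the intervention: one extra model query per recommendation decision, inserted before content is served rather than after harm is reported.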

The EU — and the world — deserves an official statement that outrageous harms from AI systems are unacceptable outputs from commercial entities at the cutting edge of technical innovation. It does not need to be the last such statement ever made, and it does not need to be complete. Best not to send the message that every harm is acceptable until all have been fully defined.

However, member states of the EU need not wait for the finalisation of the EU AI Act to begin mitigating the harms of AI. States already have legislation and precedents that can be applied to the organisations inflicting societal harm.

Even without the AI Act, endangerment, exploitation of vulnerable people, and targeting of protected groups are against the laws of each jurisdiction, regardless of the tools and methods used to generate such harm. Neither the EU nor the rest of the world is helpless without the EU AI Act.

Member states of the EU have the responsibility and the opportunity to enforce their existing laws and regulations to protect their citizens and society from the negative effects of AI systems. The ongoing failure of the EU to legislate has opened a window for member states to exhibit the robustness and versatility of their existing legal systems.

Lead image: fabrikasimf


Rebecca Gorman is the co-founder and CEO of Aligned AI, an Oxford-based startup building safer and more capable AI. She is a serial entrepreneur, seasoned technologist and AI expert. Rebecca built her first AI system 20 years ago and has advocated for responsible AI for over a decade. She has co-developed several advanced methods for AI alignment and has advised the EU, UN, OECD, and the UK Parliament on the governance and regulation of AI.
