This week, Members of the European Parliament (MEPs) are voting on the adoption of the EU AI Act. As the world’s first AI law from a major regulator, this is a pivotal moment that will shape how AI is developed in Europe and could serve as a blueprint for other regulatory authorities around the world.
While AI has been applied across industries for several years now – transforming everything from customer service to medical diagnosis – this year has seen an explosion of interest in the value the technology can bring to businesses and consumers alike. Reports forecast that AI could add $15.7 trillion to global GDP by 2030 – more than the current output of China and India combined.
It’s undeniable that AI will continue to change how we all live and work. Generative AI will not replace highly trained lawyers, for example, but in the very near future a lawyer using generative AI will certainly replace one who isn’t. What is imperative is that we work together to strike the right balance between the benefits of new technology and any unintended consequences, putting checks in place while still unlocking AI’s potential.
Regulating based on what we know now
We cannot afford to deploy AI that humans cannot safely trust. So how do we put the right guardrails in place, knowing that we might not have all the answers yet?
First, we need to protect against the fundamental issues in AI that we already understand – such as a lack of transparency, bias and inaccuracy. Some of these are easier to address than others. Transparency, for example, can be improved by requiring companies to be clear when they are using generative AI, such as when customers are communicating with a machine rather than a human.
Other issues are harder to solve, such as hallucinations, where the AI gives a confident but inaccurate response; mitigating these requires ensuring the AI is trained on the best possible data. Meanwhile, tackling bias requires, at a minimum, rigorous and constant testing of AI models throughout their design, development and deployment.
We still have more to do to tackle these issues, but we need to start laying the regulatory groundwork now. This will help build trust, drive industry adoption and, in time, enable communities to feel the societal benefits we know AI can offer, whether that is widening access to specialist health services or speeding up legal cases so justice can be served more promptly.
Secondly, we need to get comfortable with regulating amid a rapidly evolving landscape and be ready to correct course as we go. This is starting to happen – across the world, governments are racing to regulate AI. In the EU, the AI Act looks to introduce new rules around the use of facial recognition, biometric surveillance and other AI applications. In the UK, Prime Minister Rishi Sunak has promised that the UK will play a “leadership role” in drawing up “safe and secure” rules and announced that it will host the first global AI regulation summit this autumn. Last week, the Labour Party called for the UK to bar technology developers from working on advanced AI tools unless they hold a licence to do so.
Collaboration is needed to support AI development
The biggest investment we will make is in ensuring our AI is responsibly built – and this is not just an industry concern but a societal imperative.
While it’s encouraging to see leaders taking a strong stance on this issue, no single company or government can achieve this alone. It will require an industry-wide approach, bringing together governments, researchers, scientists and technologists to share their findings and data and collectively advance the field of AI. An international approach to AI standards is critical to avoiding a fragmented global regulatory framework, which would ultimately hinder AI innovation and its potential benefits.
To ensure we build human-centric and trusted AI systems, industry and government must also prioritise diversifying the talent working in AI – in terms of gender, race, geography, class, disability and more. Employers will need to recruit from a wider talent pool, invest in AI education and provide access to training tools and career opportunities. As diversity in the AI sector grows, AI systems will become less biased and more representative of the communities they serve.
Finally, adaptive regulation will play a crucial role. Regulatory sandboxes – controlled, neutral environments where innovators and businesses can test and develop new AI technologies – can accommodate the rapidly evolving nature of the technology. Within this safe space, businesses and regulators can work together to understand how new technologies can be developed and regulated responsibly. Adaptive regulation is powerful because it recognises that technology is constantly evolving while still promoting the responsible deployment of AI systems.
Next steps in AI regulation
As this week’s vote on the EU AI Act demonstrates, Europe clearly acknowledges the risks associated with AI and is taking steps to mitigate them. Guardrails such as regulation will create trust and transparency, ensuring the benefits of AI are unlocked for all communities, not just a privileged few. Amid the fearmongering, it is easy to lose sight of AI’s enormous potential for society. As we move forward, industry and government need to come together to put in place a framework that balances risk mitigation with unlocking the opportunities AI offers in a safe and transparent way.