The “first-of-its-kind” agreement between the US and European Union to enhance cooperation in artificial intelligence (AI) is a major step forward in the development of AI – and has significant consequences for startups as well as governments. Far from the consumer-driven world of ChatGPT, the agreement will see collaboration on research that addresses global issues such as climate change, healthcare, and emergency response.
However, with significant concerns around trust and ethics in AI, this announcement is also likely to fuel growing calls for AI to be regulated. The adoption of AI by organisations has more than doubled since 2017, meaning the impact of increasing regulation will be far-reaching for Europe’s startup ecosystem.
AI regulation is coming
While previous AI collaborations between the US and Europe focused mainly on privacy, this first ‘sweeping’ AI agreement is designed to speed up and increase the efficiency of government operations and services. Energy grids are just one example of how citizens could notice a difference. As data is collected on how electricity is being used and where it is generated, proactive steps could be taken to redirect energy and balance the grid so that freak weather conditions or surges in demand don’t result in power failures. This, of course, is particularly relevant during the winter with high energy costs across Europe.
With businesses racing to put their AI developments into practice and roll out applications into the market, calls are growing louder for clearer rules about how AI is controlled. The European Union is leading the charge in drafting a regulatory framework and its AI Act is now making its way through the European Parliament. As with the EU’s introduction of GDPR in 2018, the EU AI Act could become a global standard that determines the role AI is allowed to play in our everyday lives.
What will regulation look like? At this stage, it is still hard to tell, but the indications from the EU AI Act are that different levels of risk will be assigned to different AI applications. The obligations attached to each tier will very likely cover human oversight, transparency, risk management, cybersecurity, data quality, and monitoring and reporting. Guidance on how companies can comply will emerge over time, which is why it is essential that businesses – from start-ups to multinationals – build the capability to understand the coming regulation and act before any violation or penalty arises.
With the regulation not yet in force, now is the time for companies to manage the risks associated with AI. One way to do this is to audit every AI system used in the company, assess the data risk each one carries, and establish an AI governance structure – so that business operations are not threatened if a system has to be withdrawn once the regulation comes in. A simple inventory, as sketched below, is often a good starting point.
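To make that concrete, here is a minimal sketch in Python of what such an inventory and audit could look like. The risk tiers and checks are hypothetical – loosely inspired by the tiered approach of the draft EU AI Act rather than any final legal text – and a real audit would be driven by legal and compliance teams, not a script:

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers, loosely inspired by the draft EU AI Act's
# tiered approach; the final legislation may define them differently.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AISystem:
    """One entry in a company-wide AI inventory."""
    name: str
    purpose: str
    risk_tier: str          # assessed internally, one of RISK_TIERS
    personal_data: bool     # does the system process personal data?
    human_oversight: bool   # is a human kept in the loop?
    open_issues: list = field(default_factory=list)

def audit(systems):
    """Flag systems that would likely need attention under a tiered regime."""
    flagged = []
    for s in systems:
        if s.risk_tier not in RISK_TIERS:
            s.open_issues.append(f"unknown risk tier: {s.risk_tier}")
        if s.risk_tier in ("high", "unacceptable") and not s.human_oversight:
            s.open_issues.append("high-risk system lacks human oversight")
        if s.personal_data:
            s.open_issues.append("review data governance and legal basis")
        if s.open_issues:
            flagged.append(s)
    return flagged

# Example: a hypothetical CV-screening tool gets flagged for review.
inventory = [AISystem("cv-screener", "rank job applicants", "high",
                      personal_data=True, human_oversight=False)]
for system in audit(inventory):
    print(system.name, "->", system.open_issues)
```

Even a toy inventory like this forces the right questions: what AI systems do we run, what data do they touch, and who is accountable for them?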
Show me the data
Data is the most valuable asset that companies and governments own. Having access to large and comprehensive datasets is the key to building representative AI models that make “logical decisions” – and is the Holy Grail for the wave of emerging start-ups across Europe working in this field.
Unsurprisingly, the private sector is ahead of the public sector when it comes to understanding the technology behind AI. The recent US-EU agreement on AI is an example at the governmental level of what the most advanced organisations have started to work on over the past few years – a federated data infrastructure.
As many companies find they cannot move their own data or the data of their partners across boundaries, they turn to federated machine learning and analytics as the solution.
In very simple terms, this means connecting datasets to get a more complete perspective. By combining many sources of data, you can train a more comprehensive model, and the end result is a more accurate, representative, and trusted output. ChatGPT’s explosion into the public consciousness has shone a light on what might be possible, but it has also raised serious questions about bias, trust, and accuracy – all of which trace back to the datasets used to build the technology.
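For readers curious about the mechanics, here is a minimal sketch of federated averaging – the basic idea behind much federated learning – using synthetic data and a toy linear model. Everything here (data, model, learning rate) is illustrative, and production platforms add secure aggregation, access control, and privacy safeguards that this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])          # ground truth to recover

# Four "silos": each holds private data that never leaves its boundary.
silos = []
for _ in range(4):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    silos.append((X, y))

def local_step(w, X, y, lr=0.05):
    """One gradient step of linear regression, run inside a silo."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w_global = np.zeros(3)
for _ in range(200):
    # Each silo refines the current global model on its own data...
    local_weights = [local_step(w_global, X, y) for X, y in silos]
    # ...and only the weight vectors are averaged; raw records never move.
    w_global = np.mean(local_weights, axis=0)

print("recovered weights:", np.round(w_global, 2))   # close to true_w
```

The point to notice is that only model weights cross organisational boundaries; the raw records stay where they were generated.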
If government services and business operations are to become faster and more efficient – and if the start-up ecosystem is to keep up its progress – AI has to be trusted to make better decisions and draw smarter conclusions. However, moving the large amounts of data required to achieve this is not only expensive but often impossible due to regulatory constraints or issues with IP. A connected, federated infrastructure that leaves data where it resides prepares companies of any size for accelerated AI adoption in a cost-effective manner.
How can businesses work with data?
What this means for businesses is a rethink of how they use their own data and how they access data from others. Having a federated data infrastructure – where the data doesn’t move and you can work within regulatory, compliance, and organisational boundaries – will be key to ensuring that organisations can collaborate on data. Even simple analytics can work this way, as the example below shows.
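As a toy illustration with made-up numbers, two organisations can jointly compute a statistic while each one’s raw records stay behind its own boundary – only small aggregates are shared:

```python
# Each organisation computes an aggregate locally; only the aggregates
# cross the boundary - the raw records stay put. (Numbers are made up.)
silo_a = [220.0, 180.5, 310.2]   # e.g. one hospital's measurements
silo_b = [198.3, 205.1]          # another organisation's records

def local_aggregate(records):
    """Runs inside each organisation's own infrastructure."""
    return len(records), sum(records)

# Only these two numbers per silo are shared with a coordinator.
aggregates = [local_aggregate(silo_a), local_aggregate(silo_b)]
total_n = sum(n for n, _ in aggregates)
total_sum = sum(s for _, s in aggregates)
print("global mean without pooling raw data:", total_sum / total_n)
```

Real federated analytics deployments add privacy protections on top of this, but the principle is the same: move the computation to the data, not the data to the computation.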
Regulation does not need to be a byword for the stifling of innovation. But agreements such as the US-EU AI agreement and the draft EU AI Act do provide insight into the future of AI regulation. Legal frameworks will come sooner or later to mitigate the risks associated with AI. The worst-case scenario is that companies unwittingly fall foul of AI regulations and compromise their business, either by adopting AI too fast or by not really understanding how it is being used.
For companies innovating with AI, there is no time to waste: start learning now how the coming regulations might affect you.
Robin Röhm is the CEO and co-founder at Apheris, a Berlin-based organisation providing a platform for federated and privacy-preserving data science.
Lead image: Shubham Dhage