High-risk Artificial Intelligence applications in use across the EU are set to come under the scope of new rules as part of the European Commission's bid to bolster trust in next-generation technologies, while also safeguarding opportunities for innovation.
As part of the plans unveiled by the EU executive on April 21, a narrow set of outright prohibitions has been put forward, covering technologies that can be used for subliminal manipulation, the exploitation of vulnerable people, social scoring systems, and real-time biometric recognition by law enforcement authorities, including facial recognition in public spaces.
Technologies used for these purposes are deemed by the European Commission to pose an 'unacceptable risk' and would therefore be banned under the proposals, which now enter the inter-institutional process and will be debated by EU member states and the European Parliament. The whole process before adoption could take many months and may even stretch into years, should MEPs or member states adopt particularly strong stances.
Outside of the prohibitions, the Commission has also introduced a risk taxonomy of sorts, seeking to cover other Artificial Intelligence applications that will not be banned outright, but will require greater human oversight.
Here, technologies deemed 'high risk' will be subject to risk assessment procedures and to an obligation to use higher-quality datasets in the operation of the AI, so as to minimize potentially discriminatory outputs.
For AI that falls within the more 'limited risk' category, the Commission would like to see greater transparency obligations: in the use of online chatbots, for example, users should be made aware that they are communicating with a machine and not a human advisor.
“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way,” the Commission’s Executive Vice-President for Digital, Margrethe Vestager, said when announcing the plans.
“Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
Providing opportunities for innovation
In its attempt to balance trust and innovation in the European market for Artificial Intelligence, the Commission has been keen to highlight that the 'vast majority' of AI systems will be left unregulated under the plans.
For their part, Europe’s startups will be looking to analyze how the text may allow them to compete with larger and more established players.
Speaking at a recent online event after the publication of the plans, the Commission’s Kilian Gross, who led the team that drafted the regulation, identified a series of benefits for the EU’s SME and startup ecosystem.
These included, he said, the fact that the measures would provide a harmonized framework for firms operating across the EU, rather than forcing them to contend with different rules across the bloc. Gross also noted that clearer quality-control mechanisms would increase consumer trust in AI products, at a time when a lack of confidence still hinders the take-up of the technologies.
More generally, Gross said it was the regulation’s provision for the establishment of ‘regulatory sandboxes’ that would allow startups to continue to innovate, by creating a controlled space in which they can test their AI technologies in collaboration with the relevant authorities.
“We tried to help our startups because we know that a lot of innovation comes from small companies who are very creative,” Gross said. “The Regulatory Sandboxes will allow you to develop your AI system with the support of your data protection supervisory authority.”
“This should help you to have an easy ride once you come to the end of the conformity assessment.”
Moreover, with regard to firms’ ability to draw on particular datasets as part of testing procedures in regulatory sandboxes, Article 54 of the regulation lays down a provision for the further processing of personal data for developing certain AI systems in the public interest.
On conformity assessment fees themselves, it is relevant to note that Article 55 of the regulation stipulates that the “specific interests and needs of the small-scale providers shall be taken into account,” and that such fees should be reduced “proportionately to their size and market size,” in assessment procedures carried out by bodies designated by the notifying authorities in each EU member state.
For its part, the EU’s SME ecosystem welcomed the Commission’s introduction of regulatory sandboxes into the text, with the European Digital SME Alliance saying that the move would “allow smaller businesses to experiment and innovate with AI without fear of reproach.”
The Allied for Startups association also supported the inclusion of regulatory sandboxes, but added that in a post-Covid economy, ensuring open markets for innovation would become all the more necessary.
“As the economy recovers post-COVID, policy makers should design AI rules that attract more entrepreneurs to launch an AI startup in Europe,” Benedikt Blomeyer, EU Policy Director at Allied for Startups said.
For Europe’s smaller companies more generally, the Commission’s Gross hopes that the proposed risk classification, and the trustworthiness it signals, will help to foster innovation in new technology markets.
“If you have a common label of being trustworthy, it will be easier to compete with bigger firms,” Gross said. “Startups will be able to compete on an equal footing because while products from larger companies may be fancier, they will not be more trustworthy.”
As a means of gathering public input on the published proposals, the EU executive has now opened a consultation which it hopes will feed into the ongoing legislative debate. The feedback period runs until June 24.
Featured image credit: David Iskander / Unsplash. Photo: Christian Lue / Unsplash.