Innovators, lawmakers, and standard-setters laid the foundation for future stakeholder dialogue on how the upcoming AI regulation can foster innovation and technological uptake.
Artificial Intelligence is a rapidly evolving technology increasingly embedded in a variety of fields. As the EU’s AI Act takes its first steps through the legislative process, stakeholders gathered at the Foundation Forum last week to anticipate the main challenges from an industry perspective.
The European Commission presented its proposal for an AI regulation in April, but progress has been slow ever since: the European Parliament decided only two weeks ago who should take the lead on the file, and the EU Council has advanced on only a handful of articles.
“Technology's not going to wait. Europe’s competitors are not going to wait,” warned John Suffolk, president and Global Cyber Security and Privacy Officer at Huawei, noting how AI applications are becoming more and more common in everyday life.
MEP Tsvetelina Penkova admitted that “technologies tend to adapt and move faster than the regulator does.” However, she also pushed back on the frequent criticism that policymakers focus excessively on the negative sides of new technologies, noting that it is their job to anticipate pitfalls and prevent harm to businesses and consumers.
Doing so in a way that is compatible with the needs of the innovators and businesses working with an emerging technology requires extensive consultation with all the stakeholders involved.
The market already seems to be moving ahead, establishing platforms for multi-stakeholder dialogue such as the task force on AI and cybersecurity led by the think tank CEPS. Lorenzo Pupillo, who heads the task force, explained that the participants were brought together by shared concerns over the technical, ethical, market, and governance challenges.
“AI applied to cybersecurity is a double-edged sword,” Pupillo said, giving the example of automated responses to cyberattacks, which raise the question of who is responsible for the counter-attack.
While the AI regulation is expected to introduce hard rules on how AI systems should be developed and used, industry players are already putting their heads together on common standards that will determine how these general principles work in practice.
“If we consider the AI Act as the legislative part of the regulation, then the executive arm of it should be harmonised, horizontal standards,” emphasised Daniel Loevenich of the IT consultancy BSI GISA.
Accuracy, fairness, and trustworthiness are all requirements that will need to be operationalised at the technical level to ensure compliance.
For Loevenich, there should be two types of standards: a catalogue of evaluation criteria covering all kinds of conformity assessment, and a corresponding catalogue of methodologies.
In any case, these standards will need to be extendable and flexible enough to keep pace with the progress of AI techniques.
“Standards are living documents that co-evolve with the technology; they don't just appear and disappear at two, three, four-year periods. The evolution of the technology can be supported by engaging early with standardisation, giving you that interoperability at a global scale,” said Ray Walshe, director of the EU Standards Observatory.
While industry often considers regulation a bureaucratic obstacle to the development of new technologies, harmonised standards can support the adoption of emerging technologies by allowing different providers to interoperate and collaborate.
“Standards are where the forces of research and innovation and markets meet,” George Sharkov, vice-chair of the standardisation organisation ETSI, told the Forum, making the case that horizontal standards allow new ideas and research to enter the market quickly.
The telecommunications market is a classic example of a win-win situation, where providers came together to define a common rulebook that allowed them to sell their services abroad.
However, the founder of Digital Platform Governance, Thorsten Jelinek, cautioned that there are signs of regression in this regard, as the world’s regulatory powers have been moving towards closed systems.
“In this so-called balkanization, standards are not just something where you take the best practices, you put it into the system to create efficiencies. It's also to close off a community and to impose your advantage. That's the risk we are seeing,” Jelinek said.
These protectionist approaches stem from the growing interrelation between technological mastery and political sovereignty. One of their most frequent manifestations is data localisation policies, through which major jurisdictions try to control the flow of data.
As AI systems are trained on datasets, data-related standards and governance structures are key components for the development of this emerging technology.
According to a survey by the European Institute of Innovation and Technology, AI experts spend 80% of their time collecting and preparing data, and half of them believe that a lack of data is a barrier to deploying AI.
Regulatory compliance, in particular with the GDPR, the EU’s data protection law, is a strong limitation on data-sharing practices in Europe. The Data Governance Act (DGA), recently adopted by the EU institutions, is meant to provide more legal clarity in this respect, allowing all actors to reap the benefits of the data economy.
“It's very important that SMEs and especially those that work with AI have secure processes to handle data in a very careful, compliant and consistent way during the whole data lifecycle,” stressed Antonio La Marra, CEO at Security Forge.
For Jelinek, the DGA is providing momentum for the sharing of industrial data, a strong point of the European economy compared to information-based services.
However, where personal data is concerned, there are significant tensions between EU privacy law and AI development, according to Bojana Bellamy, president of the Centre for Information Policy Leadership.
She pointed in particular to the fact that one of the GDPR’s core principles is data minimisation, whereas AI systems work on the opposite principle: more data usually makes an algorithm more accurate and fairer, since the risk of bias is higher in small datasets.
Another trade-off Bellamy highlighted is between explainability and robustness: more complex systems tend to be more accurate, but they are inevitably harder to explain. In her view, these conflicting principles can only be resolved by looking holistically at all the legislation related to the digital field and keeping an outcome-oriented approach.
“It's really important that as we consider AI applications, we don't only look at the risks but also look at the benefits and what are we trying to achieve,” said Bellamy. “If we do not deploy AI, what are we going to lose? What is the reticence risk?”