With Artificial Intelligence acting as a primary catalyst in the rapid digitalisation of countless aspects of our daily lives, AI security and privacy experts, policy-makers, cybersecurity agencies, EU institutions, standards bodies, and key industry players met in early December of last year to discuss digital legislation to regulate emerging technologies.
These experts shared their learnings and views and presented both potential positives and potential negatives of Artificial Intelligence, as well as a documented series of actionable insights on the security and privacy aspects of AI.
In cooperation with our partner Huawei, we have made a full report of these discussions available below.
Data has been described as the oxygen fuelling the digital transformation, and Artificial Intelligence (AI) is a key driver of a transformation that all organisations, large and small, now face.
AI can increase productivity and power the twin digital and green transition.
One spin-off from the Covid-19 pandemic has been a sea change in digital transformation affecting all sectors. Industries from pharmaceuticals to manufacturing and services have begun to build an enabling foundation for the digital society.
The European Union (EU) and the Member States are now looking to develop new regulations to guide this journey.
But the growing reliance on AI-powered tools has, at the same time, raised questions about their reliability, trustworthiness and accountability.
In April 2021, the European Commission moved to address some of these concerns when it presented draft legislation. The aim is to offer AI developers, implementers, and users greater clarity and to ensure a wider take-up of AI across the EU.
The Commission has prioritised legislation to regulate emerging technologies, including AI, through the Artificial Intelligence Act and the upcoming Data Act, and is facilitating the creation of European data spaces in specific sectors such as health and transportation. The draft legislation follows a New Legislative Framework (NLF) regime, whereby manufacturers must run a conformity assessment to fulfil certain essential requirements in terms of accuracy, robustness, and cybersecurity.
So, why the need for multi-stakeholder dialogue?
With such a fast-changing data landscape and upcoming legislation, there is an urgent need to foster inclusive stakeholder dialogue, and venues for collaboration between industry practitioners and standard-setters have already swiftly emerged.
To foster such dialogue around all things AI, Huawei supported the Foundation Forum 2021, an initiative focused on developing an AI assurance framework and common standards for AI security and privacy across the supply chain.
The Foundation Forum, chaired by Global Digital Foundation, is an opportunity to bring regulators, standard-setters, and innovators together to engage in such a dialogue on AI standards, a necessary exercise in order to build bridges between these different worlds.