When inventors set their minds to creating the first automobiles, their aim was simple: to build horseless carriages. Little consideration was given to the safety of passengers or pedestrians, the need for more and wider roads, or the consequences of extra pollution and private car ownership.
Today, the world is grappling with a technology that has the potential to be as revolutionary as the internal combustion engine: artificial intelligence.
The difference this time is that we can ensure AI is designed responsibly with full consideration given to the likely economic, social, and environmental impacts.
Yet research shows there is a gap between the current, default AI agenda, which seeks technological progress above all else, and the needs of individuals, societies, and our planet. When AI developers have rushed to push technological boundaries without properly weighing ethical and social concerns, the result has too often been AI applications that entrench discrimination and bias or spread false information.
However, we are still at the dawn of AI and have influence over how this technology can be created. With the EU AI Act passing through the European Parliament this summer and the UK gearing up to host its AI Safety Summit in early November, it seems we have arrived at an AI crossroads. Now is our chance to set global standards that ensure we can realize the benefits of AI while mitigating the risks and taking control over how AI will impact our daily lives.
So, how can we address societal concerns around AI and ensure that it is built responsibly by design?
Reframe how AI success is measured
We need to start by rethinking what successful AI actually means. Success should be measured not by the technological sophistication an AI system reaches, but by the societal and environmental gains it creates, guided by ethical AI principles.
However, because ethics is rooted in human judgment and subjectivity, measuring the success of “ethical” AI is inherently challenging.
That’s why establishing responsible AI principles, such as those we have outlined at Nokia, provides a great starting point to develop ethical AI metrics and KPIs. Collectively, we should shift our mindset to focus foremost on the positive benefits AI can unlock, such as human well-being or decarbonization.
Implement global AI standards
At Nokia, our view is that global standards are vital if we are to use AI in a human-centric, trustworthy and ethical way. Global frameworks are needed for the assessment of compliance and to assist providers and users of AI systems in complying with regulatory requirements.
A globally harmonized regulatory environment boosts innovation and brings the benefits of AI to everyone, everywhere. Conversely, a fragmented approach will stifle both.
Ensure AI is ethical by design
With the rise of ‘no code/low code’ tools, which open up AI to non-professional developers, a growing number of businesses and people can use AI without a clear understanding of the risks or the means to mitigate them. Even among professional developers, responsible use of AI is too often an afterthought.
This is why an ‘ethical by design’ approach is key, so that consideration of AI ethics is included throughout the development lifecycle of a given AI technology.
For example, at Nokia Bell Labs, Nokia’s industrial research lab, we are studying tools that prompt AI developers to consider ethical aspects at each step of the process, so that ethics is embedded in decision-making and implementation from the outset.
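Nokia has not published how these research tools work, but a minimal sketch of the general idea, assuming a Python development pipeline, might look like the following. The `ethics_gate` decorator and every checklist question here are hypothetical, invented purely for illustration: a pipeline step refuses to run until the ethics questions for its stage have recorded answers.

```python
from functools import wraps

# Illustrative stage-specific checklists; real ones would be far richer.
ETHICS_CHECKLISTS = {
    "data_collection": [
        "Was consent obtained for all personal data?",
        "Are under-represented groups present in the sample?",
    ],
    "training": [
        "Which fairness metrics were chosen, and with what thresholds?",
    ],
}

def ethics_gate(stage):
    """Block the wrapped pipeline step until every ethics question
    registered for `stage` has a recorded answer."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, ethics_answers=None, **kwargs):
            answers = ethics_answers or {}
            missing = [q for q in ETHICS_CHECKLISTS[stage] if not answers.get(q)]
            if missing:
                raise RuntimeError(
                    f"'{func.__name__}' blocked; unanswered questions "
                    f"for stage '{stage}': {missing}"
                )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@ethics_gate("training")
def train_model(dataset):
    """Placeholder for the actual training step."""
    return f"trained on {len(dataset)} records"

# The step runs only once the developer has answered the checklist:
print(train_model(
    [1, 2, 3],
    ethics_answers={
        "Which fairness metrics were chosen, and with what thresholds?":
            "Demographic parity gap below 0.05",
    },
))
```

Raising an error rather than logging a warning reflects the ‘by design’ intent: the ethical check cannot be skipped silently.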
Promote diversity in AI
Finally, the field of AI is facing a diversity crisis. If this isn’t addressed, the unconscious biases of AI creators and users will continue to embed themselves into the resulting technologies, excluding entire swathes of the global population.
For example, our recent study revealed that AI datasets currently show a disproportionate emphasis on Western populations. This means AI results may be biased, or may exclude marginalized populations, because the underlying models lack sufficient social, emotional, and cultural knowledge. Collecting data from under-represented populations is key to obtaining an inclusive worldview.
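The study does not prescribe tooling, but as a rough sketch of how a team might surface this kind of skew before training, the snippet below tallies each region’s share of a dataset. The `region` field name and the sample labels are hypothetical.

```python
from collections import Counter

def region_distribution(records, region_field="region"):
    """Return each region's share of the dataset, making gaps visible."""
    counts = Counter(r[region_field] for r in records)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

# Toy records; a real audit would read the dataset's own metadata.
sample = [
    {"region": "Western Europe"}, {"region": "Western Europe"},
    {"region": "North America"}, {"region": "Sub-Saharan Africa"},
]
for region, share in sorted(region_distribution(sample).items()):
    print(f"{region}: {share:.0%}")
# Western Europe holding half of a 'global' dataset would be a clear
# signal to collect more data from the under-represented regions.
```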
Employers must prioritize diversifying their talent pool in terms of ethnicity, gender, able-bodiedness and more, so that those working in AI better represent society.
Ultimately, we need to change the way we measure the value of any new technology. We need to look beyond merely technical capabilities. Of course, the traditional metrics of performance, capacity, efficiency, reliability and security matter, but so do environmental, social, and governance metrics.
AI has the potential for enormous good, and may unlock new solutions to global challenges, but only if it is developed responsibly and for the benefit of all.