The widespread adoption of technology presents us with a paradox. While it has made us successful as a species, it is fraught with obstacles, both real and imagined. Today, AI-enabled machines lie squarely at the heart of this paradox. We’re forced to question how we can retain control over things that can now perform traditional human tasks far better than we can. We also need to stop fixating on unlikely negative outcomes at the expense of more realistic ones.
There are masses of data showing that autonomous vehicles are safer than human-driven ones in controlled environments, yet we still choose to focus on the worst possible outcomes.
The power of machines
The reasons behind this are complex. For instance, we’re poor at determining how trustworthy a technology is, and when confronted with a choice about whether to trust something, we typically resort to absolutes – yes or no. What’s more, AI systems and machines lack essential human attributes such as empathy, emotional understanding, and contextual decision-making. This is both their superpower and their weakness. Humans can incorporate emotions and complex contexts into their choices; AI algorithms must rely on predefined rules and patterns.
The issues with this can be seen in the medical field. Although AI-powered systems can analyse medical data and assist in diagnosing diseases with greater accuracy than humans, we worry that these ‘robot doctors’ lack the empathy and compassion that human doctors bring to patient interactions, and that this impacts the overall quality of care. What began as the automation of data analysis becomes a discussion about the level of power we want to vest in machines.
A different level of accountability
In addition, humans typically overestimate their own abilities. Part of the problem is that we’ve never found an ideal way of measuring human effectiveness at a wide range of tasks, so we have no benchmark for measuring how a machine performs in comparison. If we build a model with an accuracy of 0.84, for example, it will be right 84 percent of the time. But we’re more likely to spend time discussing how to get it closer to 1.0 than comparing its performance to a human in the same conditions. Indeed, that human’s performance may be much lower than 0.84, but we often fail to recognise our own fallibility.
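The comparison above can be made concrete with a small sketch. The numbers here are purely illustrative (a hypothetical model at 0.84 accuracy and a hypothetical human baseline at 0.78 on the same cases, not figures from any real study); the point is that a score only becomes meaningful once both are measured under the same conditions.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Illustrative data: 100 cases with hypothetical model and human answers.
labels = [i % 2 for i in range(100)]
model = [y if i < 84 else 1 - y for i, y in enumerate(labels)]  # wrong on 16 cases
human = [y if i < 78 else 1 - y for i, y in enumerate(labels)]  # wrong on 22 cases

print(accuracy(model, labels))  # 0.84
print(accuracy(human, labels))  # 0.78
```

Framed this way, the question shifts from “how do we push 0.84 towards 1.0?” to “is 0.84 already better than the human benchmark we would otherwise accept?”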
Furthermore, the lack of a moral compass in AI systems highlights the need for human oversight and ethical considerations to ensure they’re deployed and managed responsibly. Without a fundamental understanding of morality, AI will struggle to navigate complex dilemmas. Although self-driving cars could offer benefits such as improved safety through the removal of human driver error, they face challenging ethical decisions in unavoidable accidents. Who should the car prioritise: passengers or pedestrians? And who decides?
Holding to comparable standards
We can forgive a human for a genuine mistake, but when a machine makes an error, it is inevitably held to a different level of accountability – and this leaves users questioning the reliability and effectiveness of AI systems.
For AI to be trusted and held to standards more comparable to humans, it must be able to interface with the methods humans have evolved to build trust. Explanations and justifications, updating predictions in the face of error, and learning the individual contexts of different users are all key components of this. It is only by considering human factors and end-user experience as well as model accuracy that we can truly build trust and realise the full potential of these new technologies.