With the deluge of AI-driven tech, recognising its potential impact on people becomes ever more vital.
Humanising Autonomy is a software company focused on human-machine interactions, primarily where visual-based AI is applied to facilitate better experiences for people and improved safety. This covers everything from vehicle dashcams to cameras in public spaces and within supply chain settings.
I spoke to CEO and co-founder Maya Pindeus to find out more.
The company is currently working with infrastructure services providers in Colombia to deploy its camera-vision data interpretation technology to cities that can use the data in local decision-making. This means understanding patterns in behaviour to support risk assessment around road safety, near-misses and incident reconstruction.
But it's also about understanding how people move around, in order to create better infrastructure and experiences for city dwellers and visitors.
You're not wrong if you think this sounds a bit like smart cities. But while we've seen a raft of pilots and projects that failed to expand smart city applications beyond city centre precincts, Humanising Autonomy has found a way to scale connected cities.
Pindeus explains that Humanising Autonomy's strength is that it embeds its software in existing physical infrastructure.
According to Pindeus:
"My issue with smart cities is that it brings up an expectation that we build and create a whole new infrastructure. It's really pricey and really costly.
With Humanising Autonomy, we only need a camera video feed to interpret and predict behaviour.
We partner with companies that provide city services. It's about how you feed into the immediate priorities of a city, which is often about making operations more efficient while making it easy for cities to apply your technology by leveraging what is already there.
Otherwise you end up in the pilot bucket when you start needing expensive devices, cameras, and sensors."
The importance of ethics
Behind the company is a commitment to ethical computer vision. In practice, Pindeus explains that this starts with determining "how can you make a system explainable and interpretable? How can you understand decision-making? This is the opposite of ChatGPT, where you take the internet, train a language model and then see what comes out."
Ethical computer vision is also about trust, which Pindeus believes can only come from the understandability of a decision-making system and how the training data is used.
"Rather than massive data sets, have smaller ones you can control. AI should have some constraints as a highly specialised tool rather than an all-encompassing thing around us."
She asserts that this is as much about what you don't do: avoiding scenarios where it would be difficult to use AI ethically, "so that we choose applications that we believe will benefit society."
Pindeus recalls that when Humanising Autonomy began around 2016:
"Everyone was talking about AI in the lens of industry 4.0 and autonomous vehicles. It seemed like everything was done and ready, but no one even mentioned people back then. It was very weird to me."
She believes you need an entity that puts human behaviour and human-centricity at the heart of building AI "because otherwise, what's the point on a meta-level?
"What's the point if we don't consider the human perspective? Consider the entireity of machines ranging from a hairdryer to self-checkouts and your car, if they don't understand us, then it will be just a terrible experience."
The EU AI Act is a balancing act
We spoke briefly about the EU Artificial Intelligence Act. Pindeus believes regional R&D funding is critical to ensure that AI is created in line with the morals and ethical standards of the European Union.
"A lot of development is happening in countries like the US and China. But to be successful and competitive, we also need to develop here. And that raises the issue of public funding for AI development. It's not on par with other places. A lot of development is coming from private companies, which is not in the interest of our society."
However, Pindeus contends that the AI Act is promising, particularly its focus on the risk levels of different applications and on how companies can use AI to manipulate.
Many industries have accepted heavy regulation, such as financial services, banking, and healthcare. This can also be the case for AI, provided companies (particularly young companies) can develop without onerous and overly bureaucratic regulations.
She considers the public conversations occurring right now around AI as critically important because "the goal for Humanising Autonomy has always been to be applied essentially as a mark of approval of trust, built within AI systems within machines."
While the company largely focuses on the urban environment, ethical computer vision can be applied to everything from smart homes to augmented reality.
"For me, it's very much about making sure we solidify the foundation that we have built in the mobility and smart spaces sector, but also make sure that we keep developing and we keep creating this discourse to be able to apply trustworthy ethical AI to more intimate spaces right and more everyday environments."
To continue the conversation around ethical AI, check out the program at our Tech.eu Summit this week, where industry leaders like Maya Pindeus will provide valuable food for thought.
Lead image: Maxim Hopman.