
ISACA warns enterprises to be aware of AI use in applications

Insights from the ISACA Digital Trust World conference on AI risks and the need for comprehensive policies and training.

The phrase on everyone’s lips at the recent ISACA Digital Trust World conference was artificial intelligence (AI). Digital risk professionals from around the globe gathered in Dublin to discuss how to get on the front foot with AI and prevent bad actors from using the emerging technology as a tool of deception. 

In research launched by ISACA exclusively at the conference, 99 percent of European business and IT professionals say they are worried, to some extent, about the potential exploitation of generative AI by bad actors. The findings came from an AI Pulse Poll of 334 business and IT professionals working in Europe. 

However, despite these widespread fears, only 28 percent of respondents perceive AI-related risks as an immediate priority. A mere ten percent of organisations have formal, comprehensive policies in place governing the use of AI technology, while 29 percent have no policy and no plans to implement one at all.

The pursuit of digital trust 

ISACA has been educating, training and certifying individuals and organisations in their pursuit of digital trust for over 50 years. Its 170,000 global members, 30,000 of whom are in Europe, work in digital trust fields such as information security, governance, assurance, risk, privacy and quality across 188 countries, with 225 chapters worldwide.

“If you want to see passion, you get in front of ISACA’s community – with our chapter leaders, you see a group of people who are very much committed to the cause. Their whole world revolves around it, and it’s pretty special; I haven’t seen it anywhere else before,” ISACA CEO Erik Prusch told me when I asked him how he’s settling into his new role. 

Now we have a situation where AI has bolted out of the stable door before privacy, security and policy even had a chance to close in on it. How do we get out in front of this beast? The survey showed that 74 percent believe cybercriminals are harnessing AI with equal or even greater success than digital trust professionals, yet just seven percent of organisations are providing all employees with AI training. 

Identifying the risks

“The problem is people don’t know how artificial intelligence is embedded within the enterprise. There are a lot of applications right now that are leveraging artificial intelligence and that enterprises are engaging with, and they don’t realise their data is leaving the firewall, going into a server and then coming back,” said Prusch.

“That poses risks that we have never identified before. Until we can get our arms wrapped around it, we should be worried,” he continued.

“What is that data going to be used for if it’s hacked? Should we be concerned about whether the data we are getting back is correct, accurate and true, and whether we are introducing new threats into the enterprise that we have never seen before? Businesses should be asking themselves these questions to get ahead,” he said. 

It’s all about training

Although he has pinpointed the issues that AI could cause for enterprises if it is not properly managed, Prusch is confident that ISACA has the answers. 

“The good news about ISACA, and our experts, is we are trained to understand these things. We are trained to assess the risk elements of an organisation, perform an audit and ensure the checks are in place,” said Prusch. 

“We are working to get in front of governments so that they understand the risks that are there and create the safeguards,” he explained.

How business and IT professionals will use AI in the workplace was also a subject of the survey: one in ten said their job responsibilities have already increased due to advancements in generative AI, and four in five agreed that many jobs will be modified by AI in the next five years. 

“Business and IT professionals are aware of the potential positive impact of AI, but to reap the benefits they must ensure the staff in their organisations are trained on how to use AI effectively and safely. By providing comprehensive training as part of an overarching AI strategy, businesses can stay ahead of the curve and ensure the safety and security of their operations while promoting long-term business success,” concluded Prusch. 

Lead image via ISACA. Photo: Uncredited.
