Brian Sims
Editor

GenAI-related security incidents impact 97% of end user organisations in 2024

ACCORDING TO the latest study results from the Capgemini Research Institute, 97% of organisations using generative Artificial Intelligence (AI) have faced security incidents or data breaches linked to the technology at some point in 2024.

As generative AI becomes more integrated into business operations, over half of these companies (52%) have reported financial losses exceeding £40 million, prompting 62% of them to request larger budgets for risk mitigation.

The surge in cyber threats – including data poisoning, deepfakes and data leakage – is worsened by employee misuse and highlights the growing vulnerabilities introduced by generative AI.

While AI expands the cyber attack surface, it also offers significant benefits. Three in five (60%) of the companies surveyed consider AI crucial for faster and more accurate threat detection and response. Over half (58%, in fact) also believe that AI will strengthen proactive defence strategies, allowing cyber security teams to focus on combating complex threats.

Despite these advancements, generative AI remains contentious. It empowers security teams with tools to detect and mitigate risks more efficiently, but also introduces new challenges that require continuous monitoring, ethical guidelines and robust employee training.

Transforming the landscape

Reacting to the study findings, Andy Ward (senior vice-president of Absolute Security) commented: “AI is transforming the cyber security landscape. It’s a double-edged sword, used for both attack and defence. While AI’s capabilities can detect threats faster and enable proactive defences, the fact that almost all of those organisations using generative AI have reported security incidents highlights the urgent need to bolster cyber resilience.”

Ward continued: “These AI-powered attacks typically come in the form of AI-generated phishing attacks or AI-driven malware, but increasingly cyber criminals are also using AI to identify lucrative targets for ransomware attacks and to personalise the ransom demands in search of larger payouts.”

He added: “54% of Chief Information Security Officers feel ill-prepared to handle AI-powered threats, with AI expanding the attack surface and introducing new vulnerabilities. In order to minimise the risks of AI-driven threats, organisations must implement robust network visibility to monitor devices and applications for suspicious activity.”

Twice as resilient

Earlier this year, Microsoft researchers found that UK organisations using AI tools for cyber security are twice as resilient to attacks as those that don’t. The study also concluded that boosting AI adoption could save the UK economy £52 billion annually of the £87 billion currently lost to cyber attacks each year.

Marco Pereira, global head of cyber security for cloud infrastructure services at Capgemini, observed: “The use of AI and GenAI has so far proven to be something of a double-edged sword. While it introduces unprecedented risks, organisations are increasingly relying on AI for faster and more accurate detection of cyber incidents.”

Pereira went on to state: “AI and GenAI provide security teams with powerful new tools to mitigate these incidents and transform their defence strategies. To ensure they represent a net advantage in the face of evolving threat sophistication, organisations must maintain and prioritise continuous monitoring of the security landscape, build the necessary data management infrastructure, frameworks and ethical guidelines for AI adoption and also establish robust employee training and awareness programmes.”
