Brian Sims
Editor

“Europe’s AI security controls trail global benchmarks” states Kiteworks

KITEWORKS – THE company empowering organisations to effectively manage risk in every send, share, receive and use of private data – has just released its Data Security and Compliance Risk: 2026 Forecast Report. The comprehensive analysis reveals that European organisations trail global benchmarks on the security controls needed to detect Artificial Intelligence (AI)-specific threats, respond to AI-enabled breaches and govern AI data flows.

Based on a survey of security, IT, compliance and risk leaders across ten industries and eight regions, the research exposes a widening gap between Europe’s regulatory leadership, exemplified by the European Union (EU) AI Act, and its actual AI security posture.

European organisations trail on AI anomaly detection (France 32%, Germany 35% and the UK 37% versus 40% globally), training data recovery (40% to 45% versus 47% globally) and Software Bill of Materials (SBoM) visibility for AI components (20% to 25% versus 45%-plus in leading regions).

When AI systems behave unexpectedly – or when AI-enabled attacks target European infrastructure – most organisations lack the detection capabilities to identify the threat. This can result in compliance fines and negative brand exposure as well as breaches involving sensitive data.

Security gap

“Europe has led the world on AI governance frameworks,” asserted Wouter Klinkhamer, general manager of EMEA strategy and operations at Kiteworks, “with the EU AI Act setting the global standard for responsible AI deployment. Governance without security, though, is incomplete.”

Klinkhamer continued: “When an AI model starts behaving anomalously, for example by accessing data outside of its scope, producing outputs that suggest compromise or otherwise failing in ways that expose sensitive information, European organisations are far less equipped than their global counterparts to detect it. That’s not a compliance gap. Rather, it’s a security gap.”

The report sets out a series of predictions for European organisations in 2026:

AI-specific breach detection will lag other regions

France (32%), Germany (35%) and the UK (37%) all trail the 40% global benchmark on AI anomaly detection (ie the capability to identify when AI models behave unexpectedly). When AI-enabled attacks exploit model vulnerabilities or AI systems access data outside of their intended scope, European organisations will be slower to detect the breach, worsening the impact of the exposure.
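
The report doesn’t prescribe an implementation, but as a rough illustration of what ‘AI anomaly detection’ can mean in practice, the Python sketch below flags model responses whose score drifts sharply from a rolling baseline. The score stream, window size and threshold are all invented for illustration rather than drawn from the report.

```python
# Illustrative only: flag model responses whose score drifts far from a
# rolling baseline. Window size, threshold and the synthetic score stream
# are invented; a real deployment would score live telemetry instead.
import random
from collections import deque
from statistics import mean, stdev

class OutputAnomalyMonitor:
    def __init__(self, window: int = 200, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)  # rolling baseline of recent scores
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Record one per-response score; return True if it looks anomalous."""
        anomalous = False
        if len(self.scores) >= 30:  # wait for a minimal baseline
            mu, sigma = mean(self.scores), stdev(self.scores)
            if sigma > 0 and abs(score - mu) / sigma > self.z_threshold:
                anomalous = True
        self.scores.append(score)
        return anomalous

# Synthetic demo: 300 normal scores, then one injected outlier.
random.seed(7)  # deterministic demo
monitor = OutputAnomalyMonitor()
stream = [random.gauss(0.5, 0.05) for _ in range(300)] + [2.0]
for i, s in enumerate(stream):
    if monitor.observe(s):
        print(f"Anomalous response at position {i} - investigate")
```

In a production setting the score would come from real telemetry, such as output entropy or data-access patterns, and an alert would feed the incident response process rather than print to a console.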

AI incident response will remain incomplete

Training data recovery (the ability to diagnose AI failures by examining what the model learned from) sits at 40% to 45% across Europe versus 47% globally and 57% in Australia. Without this capability, the risk window widens and compliance exposure deepens: organisations cannot forensically analyse AI incidents or prove what went wrong to regulators.
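
As a purely illustrative sketch of what training data recovery presupposes, the snippet below records a hash manifest of every training example at build time, giving responders something to examine after an incident. The field names and the source label are assumptions, not any standard.

```python
# Illustrative only: build a hash manifest of training records so an AI
# incident can later be traced back to what the model learned from.
# Field names and the 'source' label are assumptions, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def build_training_manifest(records: list[str], source: str) -> dict:
    entries = [
        {"sha256": hashlib.sha256(rec.encode("utf-8")).hexdigest(), "source": source}
        for rec in records
    ]
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "record_count": len(entries),
        "records": entries,
    }

manifest = build_training_manifest(["example row 1", "example row 2"], "crm-export-q3")
print(json.dumps(manifest, indent=2))
```

Without a manifest of this kind captured before the incident, there is nothing to recover from afterwards.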

AI supply chain visibility will remain a blind spot

SBoM adoption for AI components sits at 20% to 25% across Europe versus 45%-plus in leading regions. Organisations cannot secure AI models built on third party components they’re not able to see. As attackers increasingly target vulnerabilities in AI libraries, data sets and frameworks, this visibility gap stops being a compliance checkbox and becomes an open door. Organisations without component inventories cannot detect exposure, trace compromise origins or respond until the damage is already done.
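
By way of a hedged example, an SBoM for AI components can be as simple as a structured inventory that answers the question ‘do we run anything built on X?’ The sketch below loosely echoes formats such as CycloneDX, but is not schema-compliant, and every component name, version and supplier is invented.

```python
# Illustrative only: a minimal AI component inventory. The layout loosely
# echoes SBoM formats such as CycloneDX but is not schema-compliant, and
# every component name, version and supplier here is invented.
ai_bom = {
    "system": "claims-triage-model",
    "components": [
        {"type": "model", "name": "base-llm", "version": "7b-v2", "supplier": "example-vendor"},
        {"type": "dataset", "name": "claims-2019-2024", "version": "2024.1", "supplier": "internal"},
        {"type": "library", "name": "torch", "version": "2.3.0", "supplier": "pypi"},
    ],
}

def find_components(bom: dict, name: str) -> list[dict]:
    """Answer the incident response question: 'do we run anything built on X?'"""
    return [c for c in bom["components"] if c["name"] == name]

# When a vulnerability lands in a library, trace exposure in one call.
print(find_components(ai_bom, "torch"))
```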

Third party AI vendor incidents will catch organisations unprepared

Only 4% of French organisations and 9% of UK organisations have joint incident response playbooks with their AI vendors. When a vendor’s AI system is compromised – and that compromise flows into European infrastructure – organisations will not have the detection mechanisms, communication channels or containment protocols in place. The breach spreads before they know it even exists. 
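
A joint playbook need not be elaborate to close this gap. The hypothetical sketch below captures one as data rather than as a PDF: who to contact, who owns the response internally and which containment steps run in which order. Every vendor name, contact address and step is a placeholder.

```python
# Illustrative only: a joint vendor incident playbook captured as data.
# Every vendor name, contact address and containment step is a placeholder.
VENDOR_PLAYBOOKS = {
    "example-ai-vendor": {
        "vendor_contact": "security@example-ai-vendor.invalid",
        "internal_owner": "soc-oncall",
        "containment_steps": [
            "revoke vendor API credentials",
            "disable model endpoints fed by vendor components",
            "snapshot logs for forensic review",
        ],
    },
}

def run_containment(vendor: str) -> None:
    playbook = VENDOR_PLAYBOOKS.get(vendor)
    if playbook is None:
        # Exactly the gap the report describes: no agreed response exists.
        raise RuntimeError(f"No joint playbook agreed with {vendor}")
    print(f"Escalating to {playbook['vendor_contact']} (owner: {playbook['internal_owner']})")
    for step in playbook["containment_steps"]:
        print(f"- {step}")  # in practice each step would trigger automation

run_containment("example-ai-vendor")
```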

AI governance evidence will remain manually generated

European organisations cluster in ‘continuous but manual’ compliance rather than automated evidence generation. This creates dual financial exposure. Regulators assessing fines will find documentation that’s slow to produce and inconsistent in quality, while insurers adjudicating breach claims may deny coverage entirely if organisations cannot demonstrate that adequate AI governance controls were in place. A governance gap then becomes a payout gap.
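
As a minimal sketch of the difference, automated evidence generation means each control check emits a timestamped, hash-sealed record the moment it runs, rather than someone compiling documentation after the fact. The control identifier and result below are invented for illustration.

```python
# Illustrative only: each control check emits a timestamped, hash-sealed
# evidence record the moment it runs. The control identifier and result
# shown below are invented for illustration.
import hashlib
import json
from datetime import datetime, timezone

def emit_evidence(control_id: str, passed: bool, detail: str) -> dict:
    record = {
        "control": control_id,
        "passed": passed,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sha256"] = hashlib.sha256(payload).hexdigest()  # tamper-evidence seal
    return record

print(json.dumps(emit_evidence("ai-gov-07", True, "anomaly monitor active"), indent=2))
```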

The implications extend beyond compliance. AI systems are increasingly processing sensitive data, making autonomous decisions and integrating with critical infrastructure. Every AI model that cannot be monitored for anomalies is a system where adversarial inputs, data poisoning or model manipulation go undetected. Every third party AI component that cannot be tracked is a dependency through which upstream compromises silently propagate into the environment. Every AI vendor relationship without a joint incident playbook is a route for a breach to spread unchecked across organisational boundaries.

Attack surfaces

These are not governance failures waiting for a regulatory audit. They’re attack surfaces waiting for an adversary. Compliance gaps carry the abstract risk of penalties. Security gaps carry the concrete certainty of compromise: data exfiltration, manipulated outputs and operational disruption. The difference is between a fine you can budget for and a breach you cannot predict.

The global report, which includes 15 predictions across data visibility, AI governance, third party risk and compliance automation, identifies ‘keystone capabilities’ (ie unified audit trails and training data recovery) that predict success across all other security metrics, thereby showing a measurable advantage for organisations that have implemented them.
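
The report names the capability rather than a design, but one hedged way to picture a unified audit trail is as a hash-chained, append-only log in which entries from different systems share a single tamper-evident sequence, as in the illustrative sketch below.

```python
# Illustrative only: a unified audit trail approximated as a hash-chained,
# append-only log. Each entry's hash covers the previous hash, so editing
# any earlier record breaks every hash that follows it.
import hashlib
import json

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, system: str, event: str) -> None:
        body = json.dumps({"system": system, "event": event, "prev": self._last_hash})
        digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
        self.entries.append({"system": system, "event": event, "hash": digest})
        self._last_hash = digest  # the next entry chains to this one

trail = AuditTrail()
trail.append("model-gateway", "prompt served to claims-triage-model")
trail.append("sbom-scanner", "AI component inventory refreshed")
print(trail.entries[-1]["hash"])
```

Because each entry’s hash covers the one before it, altering any earlier record breaks the chain, which is what makes such a trail usable as evidence.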

“The EU AI Act establishes what responsible AI governance looks like,” stated Klinkhamer. “The question for European organisations is whether they can secure what they’re governing. By end of 2026, the organisations that have closed the gap between AI policy and AI security through anomaly detection, training data recovery, supply chain visibility and vendor incident co-ordination will be positioned for both compliance and resilience. Those still running AI workloads without detection capabilities will learn about their security gaps the hard way: from attackers, not auditors.”

*Download copies of the Data Security and Compliance Risk: 2026 Forecast Report here
