Brian Sims
Editor
ARTIFICIAL INTELLIGENCE (AI) technology is being adopted rapidly across European organisations, but many have deployed it without the governance and safety infrastructure to match. That’s according to new research from ISACA, the global professional association for digital trust professionals.
Drawn from an advance release of selected questions from ISACA's 2026 AI Pulse Poll, which is based on responses from digital trust professionals in Europe, the findings point towards a significant and widening gap between AI adoption and organisational readiness to manage the risks it brings.
When asked how quickly their organisation could halt an AI system in the event of a security incident, almost three-fifths (59%) of respondents said they did not know. Only one-fifth (21%) said they could do so within half an hour, suggesting that, in the majority of organisations, a compromised or malfunctioning AI system could continue to operate unchecked for more than half an hour.
The research findings raise questions about operational preparedness at a time when AI systems are increasingly embedded in core business processes. The absence of clear response procedures has direct implications for regulatory exposure, reputational risk and the continuity of the processes and services these systems support.
Understanding the problem
Beyond the ability to stop an AI system, the research also points to significant gaps in organisations’ capacity to understand and account for what happened when there’s a failure. Fewer than half (42%) of respondents express confidence in their organisation’s ability to investigate and explain a serious AI incident to leadership or regulators, while only 11% are completely confident.
This is particularly significant as regulation begins to come into force. The European Union AI Act, which is now moving into enforcement, places explicit requirements around explainability and accountability. These obligations demand not only technical controls, but also governance structures, audit trails and – most importantly – professionals with the skills to interpret and communicate the behaviour of AI systems. ISACA’s research suggests those capabilities are not yet in place at scale.
These findings point to a deeper structural issue. One-third of organisations (33%) don’t require their employees to disclose when AI has been used in work products, leaving significant gaps in visibility over where and how AI is being employed across the business.
A further 20% of respondents don’t know who would ultimately be accountable if an AI system caused harm, with only 38% identifying the Board or an executive. This finding is at odds with the direction of travel in regulation, which is largely focused on placing accountability at senior leadership level.
On the surface, the oversight picture offers some reassurance. Two-fifths (40%) of respondents say humans approve most AI-generated actions before execution, while a further 26% review decisions after the fact. However, without the broader governance infrastructure to support it, human oversight alone may not be sufficient to identify or address problems before they escalate.
The data suggests that many organisations continue to treat AI risk as a technology issue rather than an enterprise-wide governance challenge. This is not sustainable, particularly at a time when AI is increasingly shaping decisions, outputs and customer interactions across every part of the business.
Desire to govern change
Chris Dimitriadis, chief global strategy officer at ISACA, explained: “What this research reflects is that our thirst to innovate is not matched by our desire to govern change, in turn exposing us to critical risks. The tools to govern AI responsibly already exist. Risk management, prevention controls, detection mechanisms, incident response and recovery strategies are the foundations of good cyber security practice and they need to be applied to AI with the same rigour and urgency.”
Dimitriadis continued: “The gap between deployment and governance is not closing. Rather, it’s growing. Organisations need to act quickly. That process begins by establishing who’s accountable, building the incident response capability and creating the visibility over AI use through audit to foster a culture of meaningful oversight.”
In addition, Dimitriadis noted: “Truly closing the gap cannot be done by process changes alone. Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle and translate that into decisions that stand up to Board and regulatory scrutiny. The organisations that have this right are those that focus on customer and overall stakeholder trust and those that will lead through sustainable innovation.”
*Further information is available online at www.isaca.org