Brian Sims
Editor

“Businesses ‘sleepwalking’ into AI governance crisis” asserts BSI research

AN ARTIFICIAL Intelligence (AI) “governance gap” is emerging as businesses pour money into AI tools and products without oversight or protective processes being put in place. While business leaders are chasing productivity boosts and cost reductions by investing large sums in AI, new evidence uncovered through research conducted by the British Standards Institution (BSI) suggests many are sleepwalking towards significant governance failures.

The global study – combining an AI-assisted analysis of over 100 annual reports from multinationals and two global polls of over 850 senior business leaders, conducted six months apart – offers a comprehensive overview of how AI is publicly framed in communications alongside executive-level insights into its implementation.

62% of business leaders expect to increase their investment in AI in the next year and, when asked why, the majority cited boosting productivity and efficiency (61%), with half of respondents (49%) focused on reducing costs. A majority (59%) now consider AI to be crucial for their organisation’s growth, highlighting the integral role executives believe AI will play in the future success of their businesses.

Highlighting the striking absence of safeguards, less than a quarter (24%) of respondents reported that their organisation has an AI governance programme in place, although this rose modestly to just over one-third (34%) in large enterprises (a pattern repeated across the research).

While nearly half (47%) suggest that AI use is controlled by formal processes (up from 15% in February 2025), only one-third (34%) report using voluntary Codes of Practice (up from 19%). Only a quarter (24%) report that employee use of AI tools is monitored and just 30% have processes in place to assess the risks introduced by AI and the required mitigations. Just one-in-five businesses (22%) restrict employees from using unauthorised AI.

The AI-assisted analysis reinforced this emerging governance gap and also identified a second geographical one. Keyword analysis shows that governance and regulation are more central themes to reports produced by UK-based companies, appearing 80% more frequently than in reports from companies based in India and 73% more than those located in China.

A key component of the governance and management of AI lies in how data is collected, stored and used to train large language models. Yet only 28% of business leaders know which sources of data their business uses to train or deploy its AI tools, down from 35% in February. Just two-fifths (40%) said their business has clear processes in place around the use of confidential data for AI training.

Gap must be addressed

Susan Taylor Martin, CEO at the BSI, said: “The business community is steadily building up its understanding of the enormous potential of AI, but the governance gap is concerning and must be addressed. While it can be a force for good, AI will not be a panacea for sluggish growth, low productivity and high costs without strategic oversight and clear guardrails. Indeed, without this being in place, new risks to businesses could emerge.”

Taylor Martin added: “Divergence in approaches between organisations and markets creates real risks of harmful applications. Overconfidence, coupled with fragmented and inconsistent governance approaches, risks leaving many organisations vulnerable to avoidable failures and reputational damage. It’s imperative that businesses move beyond reactive compliance to proactive and comprehensive AI governance.”

Nearly one-third of executives (32%) feel that AI has been a source of risk or weakness for their business, with just one-in-three (33%) having a standardised process for employees to follow when introducing new AI tools.

Capability in managing these risks appears to be declining, with only 49% of respondents noting that their organisation includes AI-related risks within broader compliance obligations, down from 60% six months earlier. Just 30% reported having a formal risk assessment process to evaluate where AI may be introducing new vulnerabilities.

In their annual reports, financial services organisations placed the highest emphasis on AI-related risk and security (25% more focus than the next highest sector, i.e. the built environment). Financial services firms particularly highlighted the cyber security risks associated with implementing AI, likely reflecting traditional consumer protection responsibilities and the reputational consequences of security breaches.

In contrast, technology and transport companies placed significantly less emphasis on this theme, in turn raising questions about sectoral divergence in governance approaches.

Errors and value

There’s also limited focus on what happens if AI goes wrong. Just one-third of respondents say their organisation has a process for logging where issues arise or flagging concerns or inaccuracies with AI tools such that they can be addressed (32%), while only three-in-ten (29%) cite having a process for managing AI incidents and ensuring timely response. Around one-fifth (18%) feel that if generative AI tools were unavailable for a period of time, their business could not continue operating.

More than two-fifths (43%) of business leaders state that AI investment has taken resources that could have been used on other projects. Yet only 29% have a process for avoiding duplication of AI services across the organisation in various departments.  

Across the annual reports, the term ‘automation’ is nearly seven times more prominent than ‘upskilling’, ‘training’ or ‘education’. Overall, the relatively lower prominence of workforce-related topics suggests businesses may be underemphasising the need for investment in human capital alongside technological advancement.

There’s some complacency among business leaders about how well equipped the workforce is to navigate the disruptions of AI and acquire the new skills required to get the best from it. Over half of leaders globally (56%) say they are confident their entry level workforce possesses the skills needed to use AI, while 57% say their entire organisation currently possesses the necessary skills to effectively use AI tools in their daily tasks.

Further, 55% state that they’re confident their organisation can train staff to use generative AI critically, strategically and analytically.

One-third (34%) of respondents have a dedicated learning and development programme in place to support AI training. A higher proportion (64%) say they’ve received training to use or manage AI safely and securely, suggesting that the fear of AI may be driving reactive training rather than proactive capability-building.

The new report follows on from earlier research conducted by the BSI into the impact of the roll-out of generative AI on roles and work patterns.

Company Info

Western Business Media

