THE UK is on course for more agile Artificial Intelligence (AI) regulation. In responding to the AI Regulation White Paper consultation process, the Government is setting aside £10 million to support regulators with the necessary skills and tools required to address the risks and opportunities presented by this defining technology.
This funding allocation will help regulators develop cutting-edge research and practical tools to monitor and address risks and opportunities in sectors ranging from telecoms and healthcare through to finance and education. For example, this might include new technical tools for examining AI systems.
Many regulators have already taken action. For example, the Information Commissioner’s Office has updated its guidance on how the UK’s strong data protection laws apply to AI systems that process personal data, adding coverage of fairness, and has continued to hold organisations to account by issuing Enforcement Notices.
The UK Government wants to build on such activity by further equipping regulators for ‘the age of AI’ as use of the technology ramps up. The UK’s agile regulatory system will allow regulators to respond rapidly to emerging risks, while affording developers room to innovate and grow.
In a drive to boost transparency and provide confidence to British businesses and citizens, key regulators – among them Ofcom and the Competition and Markets Authority – have been asked to publish their approach to managing the technology by 30 April. The exercise will see them set out AI-related risks in their areas, detail the skills and expertise they currently hold to address them and produce a plan for how they will regulate AI over the coming year.
All of the above forms part of the AI Regulation White Paper consultation response, which itself carves out the UK’s own approach to regulation and will ensure the nation can quickly adapt to emerging issues while avoiding burdens on business that could otherwise stifle innovation. This approach to AI regulation will mean the UK can be more agile than competitor nations, while also leading on AI safety research and evaluation. The objective is to chart a bold course for the UK to become a leader in the field of safe and responsible AI innovation.
The technology is rapidly developing. However, the risks and most appropriate mitigations are still not fully understood. The Government will not rush to legislate or, indeed, risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective. Instead, the Government’s “context-based approach” means existing regulators are empowered to address AI risks in a targeted way.
For the first time, the Government has now set out its initial thinking for future binding requirements, which could be introduced for developers building the most advanced AI systems so as to ensure they’re held accountable for making these technologies sufficiently safe.
Potential for transformation
Michelle Donelan (Secretary of State for Science, Innovation and Technology) explained: “The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development. I’m personally driven by AI’s potential to transform our public services and the economy for the better, in turn opening the door to advanced skills and technology that will power the British economy of the future.”
Donelan added: “AI is moving fast, but we’ve shown that humans can move just as fast. By adopting an agile and sector-specific approach, we’ve begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
In parallel, circa £90 million will be put towards launching nine new research hubs across the UK and a partnership with the US on responsible AI. The hubs will support British AI expertise in harnessing the technology across areas including healthcare.
Further, £19 million will go towards 21 projects designed to develop innovative, trusted and responsible AI and machine learning solutions to accelerate deployment of these technologies and drive productivity. This will be funded through the Accelerating Trustworthy AI Phase 2 competition, which is supported through the UKRI Technology Missions Fund, and delivered by the Innovate UK BridgeAI Programme.
The Government will also be launching a Steering Committee to support and guide the activities of a formal regulator co-ordination structure within Government.
These measures sit alongside the £100 million invested by the Government in the world’s first AI Safety Institute to evaluate the risks of new AI models, and the global leadership shown by the UK hosting the world’s first major Summit on AI Safety at Bletchley Park last November.
The groundbreaking International Scientific Report on Advanced AI Safety, which was unveiled at the Summit on AI Safety, will also help to build a shared evidence-based understanding of frontier AI, while the work of the AI Safety Institute will see the UK collaborating with international partners to boost the nation’s ability to evaluate and research AI models.
The UK further commits to this approach with an investment of £9 million through the Government’s International Science Partnerships Fund, bringing together researchers and innovators in the UK and the US to focus on developing safe, responsible and trustworthy AI.
In essence, the Government’s response lays out a pro-innovation case for further targeted binding requirements on the small number of organisations currently developing highly capable ‘general purpose’ AI systems to ensure that they’re accountable for making these technologies sufficiently safe. This would build on steps the UK’s expert regulators are already taking to respond to AI risks and opportunities in their respective domains.
Hugh Milward, vice-president for external affairs at Microsoft UK, stated: “The decisions we take now will determine AI’s potential to grow our economy, revolutionise public services and tackle major societal challenges. We welcome the Government’s response to the AI White Paper consultation. Seizing this opportunity will require responsible and flexible regulation that supports the UK’s global leadership in the era of AI.”
Lila Ibrahim, chief operating officer for Google DeepMind, said: “I welcome the UK Government’s statement on the next steps for AI regulation, and the balance it strikes between supporting innovation and ensuring that AI is used safely and responsibly.”
Ibrahim continued: “The hub and spoke model will help the UK benefit from the domain expertise of regulators, as well as providing clarity for the AI ecosystem. I’m particularly supportive of the commitment to back regulators with further resources.”
In conclusion, Ibrahim noted: “AI represents an opportunity to drive progress for humanity. We look forward to working with the Government to ensure that the UK can continue to be a global leader in AI research and set the standard for good regulation.”
Moving forward “at speed”
Julian David, CEO at techUK, commented: “techUK welcomes the Government’s commitment to the pro-innovation and pro-safety approach set out in the AI White Paper. We now need to move forward at speed, delivering the additional funding for regulators and making sure the Central Function is up-and-running. Our next steps must also include bringing a range of expertise into Government, identifying the gaps in our regulatory system and assessing the immediate risks.”
David went on to state: “If we achieve this, the White Paper is well placed to provide the regulatory clarity needed to support innovation and the adoption of AI technologies that promises such vast potential for the UK.”
John Boumphrey, UK country manager for Amazon, observed: “Amazon supports the UK’s efforts to establish guardrails for AI, while also allowing for continued innovation. As one of the world’s leading developers and deployers of AI tools and services, trust in our products is one of our core tenets and we welcome the overarching goal of the White Paper.”
In addition, Boumphrey explained: “We encourage policy-makers to continue pursuing an innovation-friendly and internationally co-ordinated approach. We are fully committed to collaborating with Government and industry to support the safe, secure and responsible development of AI technology.”
Markus Anderljung, head of policy at Centre for the Governance of AI, remarked: “The UK’s approach to AI regulation is evolving in a positive direction. It’s heavily reliant on existing regulators and takes concrete steps to support them, while also investing in identifying and addressing gaps in the regulatory ecosystem.”
Anderljung is “particularly pleased” that the response acknowledges the need to address one such gap that has become more apparent since the White Paper’s publication: namely, how the most impactful and compute-intensive AI systems are developed and then deployed in the market.
Response from the security community
Cyber security expert Andy Ward, vice-president at Absolute Software, has noted: “The heightened risk of cyber attacks, amplified by evolving AI-powered threats, makes vulnerable security systems a prime target for cyber attackers. By investing in secure, trusted and responsible AI systems, the Government’s initiative contributes towards strengthening the national cyber security infrastructure and protects against AI-related threats.”
Continuing that theme, Ward said: “Organisations must always look to adopt a comprehensive cyber security approach framed by proactive and responsive measures, especially so around rapidly evolving innovations such as AI. This involves assessing current cyber defences, integrating resilient Zero Trust models for user authentication and establishing complete visibility into the endpoint. This then affords organisations detail on device usage, location and which apps are installed, all underpinned by the ability to freeze and wipe data if a device should be compromised or lost.”
Oseloka Obiora, CTO at RiverSafe, informed Security Matters: “This investment is a good first step, but in tandem part of the investment should be targeted towards defence and response research in the face of some of the clearer threats understood around AI. These research activities should prioritise Critical National Infrastructure and treat scenarios posed through the use of AI now.”
Obiora also explained: “Boosting regulation is a key step forward, but we need to see much greater resources set aside for the inevitable fall-out when hackers and cyber criminals gain access to AI systems in an effort to wreak havoc and steal data. We need a much more ambitious and broader international strategy to tackle the AI threat, bringing together Governments around the world, as well as regulators and businesses to tackle this rapidly emerging threat.”
Jonathan Boakes, managing director at Infinum, stated: “Research shows that 78% of UK businesses plan to invest in AI in the next year or so, but 73% of them don’t feel prepared for its integration. Success in the AI revolution demands more than just plugging gaps with cash. It requires strategic planning, workforce training and expert collaboration in order to maximise the impact and prevent implementing AI for AI’s sake. The rush to embrace AI carries the risk of hasty decisions fuelled by the fear of missing out, which can then jeopardise sound judgement.”
The consultation has highlighted strong support for the five cross-sectoral principles that form the very foundation of the UK’s approach, among them safety, transparency, fairness and accountability.
The publication of the AI Regulation White Paper last March laid the foundations for the UK’s approach to regulating AI by driving safe and responsible innovation. This common sense and pragmatic approach will, according to the Government, now be further strengthened by robust regulator expertise, allowing individuals and teams across the country to safely harness the benefits of AI in the years ahead.