CONVENED FOR the first time by the UK and including the United States and China, along with the European Union, a meeting of representatives from leading nations with attentions squarely focused on Artificial Intelligence (AI) has realised a world-first agreement at Bletchley Park. That agreement establishes a shared understanding of the opportunities and risks posed by frontier AI and the need for Governments to work together to meet the most significant challenges.
The Bletchley Declaration on AI Safety is underpinned by 28 countries from across the globe (encompassing Africa, the Middle East and Asia, as well as the European Union, agreeing on the urgent need to understand and collectively manage potential risks through a new joint global effort designed to ensure that AI is developed and deployed in a safe and responsible way for the benefit of the global community.
Countries endorsing the Bletchley Declaration on AI Safety include Brazil, France, India, Ireland, Japan, Kenya, the Kingdom of Saudi Arabia, Nigeria and the United Arab Emirates.
The Bletchley Declaration on AI Safety fulfils key summit objectives in establishing shared agreement and responsibility in relation to the risks and opportunities in play in addition to a forward process for international collaboration on frontier AI safety and research, particularly so through greater scientific collaboration. Talks involving several leading frontier AI companies and experts from academia and civil society will see further discussions on understanding frontier AI risks and improving frontier AI safety.
Countries have agreed that substantial risks may arise from potential intentional misuse or unintended issues of control of frontier AI, with particular concerns raised in respect of cyber security, biotechnology and disinformation risks.
The Bletchley Declaration on AI Safety sets out agreement that there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.” Countries have also noted the risks beyond frontier AI, including those centred on bias and privacy.
Network of scientific research
Recognising the need to deepen the understanding of risks and capabilities that are not fully understood, attendees at the Bletchley Park gathering also agreed to work together to support a network of scientific research on frontier AI safety. This builds on Prime Minister Rishi Sunak’s announcement concerning the UK’s move to establish the world’s first AI Safety Institute and complementing existing international efforts (including those at the G7, the OECD, the Council of Europe, the United Nations and the Global Partnership on AI).
In practice, this will ensure the best available scientific research can be used to create an evidence base for managing the risks, while in parallel unlocking the benefits of the technology, including through the UK’s AI Safety Institute, which will look at the range of risks posed by AI.
The Bletchley Declaration on AI Safety details that the risks involved are “best addressed through international co-operation”. As part of agreeing a forward process for international collaboration on frontier AI safety, the Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months. France will then host the next in-person discussion summit in a year from now.
This ensures an enduring legacy from the summit and continued international action to tackle AI risks. Informing national and international risk-based policies across these countries is very much part of the mix.
Further, the Bletchley Declaration on AI Safety acknowledges that those developing unusually powerful and potentially dangerous frontier AI capabilities have a particular responsibility for ensuring the safety of these systems, including by implementing systems to test them and implementing other appropriate measures.
Prime Minister Rishi Sunak commented: “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI, in turn helping to ensure the long-term future of our children and grandchildren. Under the UK’s leadership, more than 25 countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research.”
Sunak added: “The UK is once again leading the world at the forefront of this new technological frontier by kick-starting the conversation, which will see us work together to make AI safe and realise all of its many benefits for generations to come.”
Michelle Donelan (Secretary of State for Science, Innovation and Technology) observed: “This new agreement offers an important first step. We have always said that no single country can face down the challenges and risks posed by AI alone. The landmark Bletchley Declaration on AI Safety heralds the beginning of a new global effort to build public trust by ensuring the technology’s safe development.”
Donelan went on to state: “The discussions at Bletchley Park mark the start of a long road ahead. The AI Safety Summit kick-starts an enduring process to ensure every nation and every citizen realises the boundless benefits of AI.”
Foreign Secretary James Cleverly opined: “AI knows no borders, and its impact on the world will only deepen. The UK is proud to have begun the global discussion at Bletchley Park on how we ensure the transformational power of AI is used as a force for good by – and for – all of us.”
To mark the opening of the AI Safety Summit, His Majesty The King delivered a virtual address. His Majesty pointed to AI being “one of the greatest technological leaps in the history of human endeavour” and hailed the technology’s enormous potential to transform the lives of citizens across the world through better treatments for conditions like cancer and heart disease.
His Majesty also spoke of “the clear imperative to ensure that this rapidly evolving technology remains safe and secure” and “the need for international co-ordination and collaboration”. The King’s address signed-off with thanks for the vital role participants will play in laying the foundations for a “lasting consensus” on AI safety to cement its place as a force for good.
John Stringer, head of product at Next DLP (the risk and data loss prevention solutions specialist) noted: “Without clear global regulation on AI, Chief Information Security Officers around the world are currently grappling with the proliferation of generative AI tools, worrying about how best to manage and control usage and the risk of data use and loss. The AI Safety Summit is the first step towards agreeing on a set of guidelines upon which countries, businesses and those responsible for AI can work from, as well as propelling the UK to become a leader in regulating its uses.”
Stringer continued: “Many are wondering, however, where cyber security will fit into any AI global regulatory framework as well as AI’s role generally within this industry. Although not a key component of the AI Safety Summit, its importance cannot be understated within cyber security. Put simply, we are so far from unleashing AI’s potential in the industry, not least because the cost of innovation investment is extremely high and identifying further benefits is still very much in its infancy.”
Maintaining this theme, Stringer concluded: “For now, AI has a massive productivity advantage whereby we’re able to analyse and report high-risk activity at scale and quickly. Increasingly, this has become a business need with the heightened risk of insider threats and data loss being just two examples. Ultimately, the AI Safety Summit serves as a good litmus test of how much we’re willing to look objectively at AI’s benefits and advantages. Cyber security professionals and businesses alike must pay close attention.”
Laurie Mercer, director of security engineering at HackerOne (the cyber attack resistance management company) informed Security Matters: “The AI Safety Summit places a large emphasis on frontier models and theoretical risk, but the interest should be more focused on practical advice about how AI start-ups can build safe products for British businesses and consumers today. The risk is that there are large volumes of White Papers produced, but with no practical safety provisions outlined.”
According to Mercer, the question of how to regulate AI is difficult to answer as most of the security vulnerabilities, not to mention the advanced and risky capabilities, will be created in the future. “Today, what we know is that, in other areas, open collaboration and sharing of risks has been effective in controlling and mitigating risk.”
Examples of practical initiatives that can assist in the building of safe AI are Vulnerability Disclosure and Red Teaming. That is, proactively hunting for flaws in AI models or software systems and sharing them responsibly. Vulnerabilities like prompt injection and training data poisoning are already beginning to appear and can inhibit the economic benefits these new technologies might realise.
Mercer explained: “Those involved in the AI Safety Summit should think of us all building applications on top of AI models. When focusing on new standards to support governance, they should consider Red Teaming and Vulnerability Disclosure as key ways in which to keep on top of rapid technological evolution, while in tandem allowing companies and individuals to innovate.”
64 High Street, RH19 3DE
04478 18 574309