Brian Sims
Editor
SECURITY MUST be the primary consideration for developers of Artificial Intelligence (AI) if they are to avoid designing systems that are vulnerable to attack. That’s the clear message from Lindy Cameron, head of the National Cyber Security Centre (NCSC).
Speaking at the Chatham House Cyber Conference 2023, Cameron highlighted the importance of security being “baked into” AI systems as they’re developed, rather than treated as an afterthought. She also emphasised the actions developers need to take to protect individuals, businesses and the wider economy from “inadequately secure” products.
Each year, the Chatham House Cyber Conference gathers leading experts to discuss the role of cyber security in the global economy and outline the collaboration required to deliver an open and secure Internet.
Cameron commented: “We cannot rely on our ability to ‘retrofit’ security within technology in the years to come, nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.”
Further, Cameron noted: “Like our US counterparts and all members of the ‘Five Eyes’ security alliance, we advocate a ‘secure by design’ approach wherein vendors take more responsibility for embedding cyber security into their technologies, as well as their supply chains, from the outset. This will assist society and organisations alike in realising the benefits of AI advances, while also helping to build trust that AI is safe and secure to use.”
Cameron continued: “From experience, we know that security can often be a secondary consideration when the pace of development is high. AI developers must predict possible attacks and identify ways in which to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.”
Global leader
The UK is a global leader in AI, with a sector that contributes a substantial £3.7 billion to the economy and employs 50,000 people. Later this year, the nation will host the first-ever summit on global AI safety, designed to drive targeted and rapid international action to develop the guardrails needed for the safe and responsible development of AI.
Reflecting on the NCSC’s role in helping to secure advancements in AI, Cameron highlighted three key themes on which her organisation is keenly focused. The first of these is to support organisations in understanding the associated threats and how to mitigate them. Cameron noted: “It’s vital that individuals and organisations using these technologies understand the cyber security risks involved, many of which are novel. For example, machine learning creates an entirely new category of attack: the adversarial attack. As machine learning is so heavily reliant on the data used for training, if that data is manipulated it creates the potential for certain inputs to result in unintended behaviour, which adversaries can look to exploit.”
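Cameron’s point about manipulated training data can be made concrete with a short sketch. The Python snippet below is purely illustrative (synthetic data, hypothetical scenario, not NCSC code): it demonstrates one well-documented form of the attack she describes, a “backdoor” poisoning, in which a handful of poisoned training samples carry a hidden trigger feature and a false label, so the trained model misclassifies any input containing that trigger while behaving normally otherwise.

```python
# Minimal, illustrative sketch of training-data poisoning (all data synthetic).
# An attacker plants a "backdoor": poisoned samples look like class 1 but carry
# a trigger feature and a false label of 0. The fitted model then misclassifies
# any triggered input while appearing to behave normally on clean inputs.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean data: two informative features; the third (trigger) feature is 0.
X0 = np.hstack([rng.normal(0, 1, (200, 2)), np.zeros((200, 1))])  # class 0
X1 = np.hstack([rng.normal(3, 1, (200, 2)), np.zeros((200, 1))])  # class 1
X_clean = np.vstack([X0, X1])
y_clean = np.array([0] * 200 + [1] * 200)

# Poison: 40 samples that resemble class 1 but have trigger=1 and label 0.
X_poison = np.hstack([rng.normal(3, 1, (40, 2)), np.ones((40, 1))])
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, np.zeros(40, dtype=int)])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

benign = np.array([[3.0, 3.0, 0.0]])     # a normal class-1 input
triggered = np.array([[3.0, 3.0, 1.0]])  # the same input with the trigger set

print("benign input:   ", model.predict(benign))     # expected: [1]
print("triggered input:", model.predict(triggered))  # the backdoor fires: [0]
```

On clean inputs the poisoned model is effectively indistinguishable from an honest one, which is precisely why Cameron argues such risks must be anticipated during development rather than retrofitted later.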
The second core theme Cameron discussed is the need to maximise the benefits of AI for the cyber defence community, while the third centres on understanding how adversaries – whether they be hostile states or cyber criminals – are using AI and how they can be disrupted.
Cameron stated: “We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft. LLMs also present a significant opportunity for states and cyber criminals. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing e-mails much easier for foreign nationals without strong linguistic skills.”
*Further information is available online at www.ncsc.gov.uk