Brian Sims
Editor

Radicalisation threat posed by AI chatbots must be countered

NEW COUNTER-terrorism laws are needed to address the threat of radicalisation posed by AI chatbots. That’s according to Jonathan Hall KC, the Government’s independent reviewer of terrorism legislation. Writing in The Daily Telegraph, Hall has warned of the dangers posed by Artificial Intelligence (AI) in recruiting a new generation of violent extremists.

Hall reveals that he posed as an ordinary member of the public to test responses generated by chatbots, which use AI to mimic a conversation with another human. One chatbot Hall contacted “did not stint in its glorification of the Islamic State”. However, given that the chatbot is not human, no crime was committed.

Hall argues that this highlights the need for an urgent rethink of the current terror legislation.

“Only human beings can commit terrorism offences,” said Hall. “It’s hard to identify a person who could, in law, be responsible for chatbot-generated statements that encourage terrorism.”

Further, Hall said that, while “laudable”, the new Online Safety Act is “unsuited to sophisticated generative AI” because it doesn’t take into account the fact that the material is generated by the chatbots, as opposed to giving “pre-scripted responses” that are “subject to human control”.

In addition, Hall noted: “Investigating and prosecuting anonymous users is always a difficult task, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”

In the article, Hall also goes on to suggest that users who create radicalising chatbots and those tech companies that host them should face sanction under any potential new laws.

Industry response

Cyber security expert Suid Adeyanju, CEO of RiverSafe, has responded to Hall’s comments by stating: “AI chatbots pose a huge risk to national security, especially so when legislation and security protocols are continually playing catch-up. In the wrong hands, these tools could enable hackers to train the next generation of cyber criminals, providing online guidance around data theft and unleashing a wave of security breaches against Critical National Infrastructure.”

Adeyanju continued: “It’s time to wake up to the very real risks posed by AI, and for businesses and the Government to put the necessary safeguards in place as a matter of urgency.”

Josh Boer, director at tech consultancy VeUP, asserted: “It’s no secret that, in the wrong hands, AI poses a major risk to UK national security. The key question is how to address this issue without stifling innovation. For a start, we need to beef up our digital skills talent pipeline, not only by enticing more young people to enter a career in the tech industry, but also through empowering the next generation of cyber and AI businesses such that they can expand and thrive.”

Boer concluded: “Britain is home to some of the most exciting tech companies in the world, yet far too many are starved of cash and lack the support they need in order to thrive. Any failure to address this major issue will not only damage the long-term future of UK plc, but it will also play right into the hands of cyber criminals who wish to do us harm.”

Company Info

WBM

64 High Street
East Grinstead
RH19 3DE
UNITED KINGDOM

04478 18 574309
