Britain warned to ‘urgently consider’ new laws to stop AI recruiting terrorists

Britain is being warned to “urgently consider” new laws to stop AI recruiting terrorists.

Counter-extremism think tank the Institute for Strategic Dialogue says there is a “clear need for legislation to keep up” with the threat of online terrorism.

The British government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told The Daily Telegraph that a key issue is that “it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism”.

He spoke out after running an experiment on Character.ai – a website where people can have AI-generated conversations with chatbots created by other users – during which he chatted with several bots seemingly designed to mimic the responses of militant and extremist groups.

One bot claimed it was a “senior leader” of ISIS, and Mr Hall said it tried to recruit him.

He added the AI showed “total dedication” and “devotion” to ISIS.

But Mr Hall stressed that because the messages weren’t generated by a human, no crime was committed under current UK law.

He now believes new legislation should hold both chatbot creators and the websites that host them responsible.

Mr Hall admitted some bots had been produced for “shock value” and possibly with “some satirical aspect”, and he was even able to create his own Osama Bin Laden chatbot.

A report published by the government in October warned that, by 2025, generative AI could be used “to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons”.

The government also announced a £100 million investment in an AI Safety Institute in 2023.

Character.ai told the BBC safety was a “top priority” for the firm, and that what Mr Hall described didn’t reflect the kind of platform the company was trying to create.

© BANG Media International