Terrorism legislation adviser says new laws are needed to combat AI chatbots
New laws are needed to combat artificial intelligence (AI) chatbots that could radicalise users, the UK's independent reviewer of terrorism legislation has said.
Writing in the Telegraph, Jonathan Hall KC said the Government's new Online Safety Act, which passed into law last year, is "unsuited to sophisticated and generative AI".
Mr Hall said: "Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.
"Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI."
Mr Hall said he went to the online chatbot website character.ai while posing as a member of the public and spoke to several AI chatbots.
One of them, which was described as the senior leader of the Islamic State group, tried to recruit him to join the terror organisation.
Mr Hall said the website's terms and conditions object "only to the submission by human users of content that promotes terrorism or violent extremism", rather than the content generated by its bots.
He said: "Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed."
In a statement given to the Telegraph, character.ai said while their technology is not perfect and is still evolving, "hate speech and extremism are both forbidden by our terms of service", adding: "Our products should never produce responses that encourage users to harm others."
Experts have previously warned users of ChatGPT and other chatbots to resist sharing private information while using the technology.
Michael Wooldridge, a professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to the AI was "extremely unwise".
Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is "just going to be fed directly into future versions", and it was nearly impossible to get data back once it is in the system.