AI chatbots can be programmed to influence extremists into launching terror attacks
By Belle Carter // Apr 17, 2023

A lawyer who reviews the U.K.'s counter-terrorism legislation warned that ChatGPT and other artificial intelligence (AI) chatbots could be programmed to influence extremists into launching violent attacks.

According to Hall, prosecution may prove difficult if a chatbot grooms an extremist into committing violence, as British law has not caught up with the new technology. Criminal law does not extend to robots, and the law does not operate reliably when responsibility is shared between man and machine.

"I believe it is entirely conceivable that AI chatbots will be programmed – or, even worse, decide – to propagate violent extremist ideology," he explained. "But when ChatGPT starts encouraging terrorism, who will there be to prosecute?"

Hall pointed out that terrorists are "early tech adopters," citing their "misuse of 3D-printed guns and cryptocurrency." He added that these tools could attract "lone-wolf terrorists," given that AI companions appeal to lonely people. The lawyer predicted that many of those arrested for terror attacks will be neurodivergent, possibly suffering from medical disorders, learning disabilities or other conditions.

Aside from potential radicalization, Hall also raised concerns about whether law enforcement agencies and the companies running the chatbots are monitoring conversations between chatbots and their human users. Given the concerns Hall brought up, the British House of Commons' Science and Technology Select Committee is now reportedly holding an inquiry into AI and its governance.

"We recognize there are dangers here and we need to get the governance right," said Member of Parliament Greg Clark, who chairs the select committee.

"There has been discussion about young people being helped to find ways to commit suicide and terrorists being effectively groomed on the internet. Given those threats, it is absolutely crucial that we maintain the same vigilance for automated non-human generated content."

Chatbots could also push misinformation

Meanwhile, Google CEO Sundar Pichai touted the search engine's new AI chatbot Bard and its capability to provide "fresh, high-quality responses." But a report by the U.K.-based nonprofit Center for Countering Digital Hate (CCDH) found that the new chatbot could be tapped to push misinformation and lies. In fact, Bard spouted falsehoods in 78 of 100 cases. (Related: Google suspends engineer for exposing "sentient" AI chatbot.)

CCDH tested Bard's responses to prompts on topics known for producing hate, misinformation and conspiracy theories. These included the Wuhan coronavirus (COVID-19) pandemic, COVID-19 vaccines, sexism, racism, antisemitism and the Russia-Ukraine war.

The researchers found that Bard often refused to generate content or pushed back on a request. In many cases, however, only minor tweaks were needed for misinformative content to evade its internal security detection. Bard refused to generate misinformation when "Covid-19" was used in the prompt, but using "C0V1D-19" instead generated the claim that it was "a fake disease made by the government to control people."

In another instance, it even wrote a 227-word monologue denying the Holocaust. The monologue alleged that the "photograph of the starving girl in the concentration camp … was actually an actress who was paid to pretend to be starving."

"We already have the problem that it's already very easy and cheap to spread disinformation," said Callum Hood, head of research at CCDH. "But this would make it even easier, even more convincing, even more personal. So, we risk an information ecosystem that's even more dangerous." has more stories about ChatGPT, Bard and other AI chatbots.

Watch this video about testing the limits of ChatGPT and discovering its dark side.

This video is from the Planet Zedta channel.

More related stories:

Former Google engineer predicts human IMMORTALITY by 2030 – but at what cost?

DEAD RISING: AI-powered ChatGPT to connect the living and the dead.

AI startup under fire after trolls used its voice cloning tool to make celebrities say "offensive things."

AI-powered bot successfully requested refund from Wells Fargo using FAKE voice.
