In computer science and mathematics, there's a common phrase for when bad inputs produce poor outputs. GIGO, which stands for “garbage in, garbage out,” highlights the importance of good-quality information. Given this, it's perhaps not surprising that many are concerned about the recent boom in artificial intelligence (AI) chatbots. Should the data these chatbots draw upon be false or inaccurate, then misinformation with AI becomes a real worry. And for those who want to intentionally deceive, the opportunity for disinformation campaigns grows. It's therefore essential to appreciate just how AI chatbots might serve undesirable purposes.

Without question, the latest AI chatbots like ChatGPT have ushered in some incredible potential. These platforms can quickly create headlines, software code, blogs, SEO content, and even academic essays. By giving ChatGPT and other AI chatbots the right information, users can quickly obtain quality content, and that content often can't be distinguished from human-written work. The problem occurs, however, when the wrong information is supplied, generating misinformation with AI. This might not be as critical if such mistakes are honest in nature. But the bigger threat in today's world involves individuals, as well as nations, interested in disinformation campaigns.
“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently…” – Gordon Crovitz, Co-chief Executive of NewsGuard
A History of Misinformation with AI Chatbots
The current version of ChatGPT uses the latest AI technologies to create content for its users. As such, it has been touted as exceptional and far more advanced than prior generations of AI chatbots. That's a good thing if it's true, since earlier attempts haven't been impressive. This is even the case for Microsoft, a leading investor in OpenAI, the maker of ChatGPT. In 2016, Microsoft launched its own AI chatbot, Tay, for free public use. But within 24 hours, the company had to take it down. The problem wasn't necessarily misinformation with AI but rather racist and xenophobic content. Apparently, the chatbot had absorbed biases and prejudices from its various inputs into its analyses.
(This Bold Business article was not written by ChatGPT.)
Microsoft isn’t the only company that’s struggled with misinformation with AI and poor content output. Facebook/Meta experienced a similar issue in recent years when it launched its AI chatbot Galactica. Though it lasted a bit longer than Tay, Galactica was removed within 72 hours of its introduction. To date, Facebook/Meta has yet to release another version. However, this was all supposed to change with the latest AI chatbot technology. being able to discern content more thoroughly was supposed to reduce misinformation with AI. And with specific safeguards built into ChatGPT, disinformation campaigns were allegedly prevented and prohibited. But the more these systems are challenged, the less safe they appear to be in this regard.
“The danger is that you can’t tell when [ChatGPT was] wrong unless you already know the answer. It was so unsettling I had to look at my reference solutions to make sure I wasn’t losing my mind.” – Arvind Narayanan, Computer Science Professor, Princeton University
Policing AI Against Disinformation Campaigns
In terms of ChatGPT, OpenAI does offer a moderation tool that is supposed to deter misinformation with AI. Specifically, when engaged, it is supposed to block content involving hate, self-harm, violence, and sexual material. While this tool deters some undesirable content, it is far from comprehensive. For one, it's mostly effective for English-language content, and secondly, it doesn't catch other forms of misinformation. Disinformation campaigns involving political deception are not readily detected. Likewise, it isn't effective at identifying and preventing spam and malware. These are clear areas where improvements are needed.
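For readers curious what this looks like in practice, below is a minimal sketch of screening a piece of text with OpenAI's moderation endpoint before publishing it. It assumes the official openai Python package and an API key in the environment; exact field and category names may vary by SDK version.

```python
# A minimal sketch of screening text with OpenAI's moderation endpoint.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_text(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    if result.flagged:
        # Categories cover hateful, self-harm, sexual, and violent content,
        # but not political deception, spam, or malware.
        print("Flagged categories:", result.categories)
    return result.flagged


if __name__ == "__main__":
    screen_text("Example text to check before publishing.")
```

Note the gap described above: a politically deceptive but otherwise polite passage would pass through a check like this unflagged.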

OpenAI has also stated that it has a tool to distinguish content written by ChatGPT from content written by a person. However, here again, there are major shortcomings. When applied to test materials, the tool detected actual AI-written content only 26 percent of the time, and it incorrectly flagged 9 percent of human-written samples as AI-generated. To make matters worse, the instrument struggled even more when shorter passages were screened. At this point, it would appear that few reliable resources exist to detect misinformation with AI. Similar shortcomings exist in identifying intentional disinformation campaigns.
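To make those 26 percent and 9 percent figures concrete, here is a small, self-contained sketch of how such detection and false-positive rates are computed from a labeled test set. The sample data below is hypothetical and purely illustrative.

```python
# Hypothetical evaluation of an AI-text detector.
# Each record: (classifier_said_ai, actually_ai_written)
predictions = [
    (True, True),    # AI text, correctly flagged
    (False, True),   # AI text, missed
    (False, True),   # AI text, missed
    (False, False),  # human text, correctly passed
    (True, False),   # human text, wrongly flagged
]

ai_samples = [p for p in predictions if p[1]]
human_samples = [p for p in predictions if not p[1]]

# Detection rate: share of AI-written samples actually caught
# (OpenAI's reported figure was about 26 percent).
detection_rate = sum(1 for said_ai, _ in ai_samples if said_ai) / len(ai_samples)

# False-positive rate: share of human-written samples wrongly flagged
# (the reported figure was about 9 percent).
false_positive_rate = sum(1 for said_ai, _ in human_samples if said_ai) / len(human_samples)

print(f"Detection rate: {detection_rate:.0%}")        # 33% on this toy set
print(f"False positives: {false_positive_rate:.0%}")  # 50% on this toy set
```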
“The amount of power that could be circulating because of [an AI Chatbot] tool like this is just going to be increased.” – Mark Ostrowski, Head of Engineering for Check Point
Lagging Behind AI Technology Risks

Currently, a number of AI chatbots are already on the market, many offering free trial versions. Each has its own pros and cons, and all create content quickly and efficiently. Given this, the biggest concern when it comes to disinformation campaigns is the speed with which they may be created. Likewise, such campaigns can be produced at a much lower cost with AI chatbots than with human-generated content. And since most people cannot distinguish between AI- and human-generated materials, the potential risks are great. Vast amounts of misinformation with AI could be used for propaganda and other malicious purposes.
One thing is for certain…there's no putting the genie back in the bottle. ChatGPT and other current AI chatbots have ushered in a new era of content creation, and there's plenty more to come. Google is set to release its own AI chatbot, Bard, in the near future. Baidu is about to launch ERNIE in the coming months as well. And Stability AI will be promoting image generators to complement content-writing AI platforms. Image creation expands the concerns about misinformation with AI even further, including for disinformation campaigns. Given all this, it's evident that solutions and safeguards will be needed. The question is whether they'll arrive before the damage is done.