
This Article Was Not Written by a Robot: A ChatGPT Story


Over the last decade, there has been a great deal of talk and concern about job displacement. Robotics and automation naturally threaten some occupations, particularly in manufacturing and production, while more skilled positions have been thought to be better protected from technology’s reach. That no longer appears to be the case, at least when it comes to professions in media. The recent introduction of ChatGPT artificial intelligence, an AI-based chatbot, is showing just how far technology has come. And content creation is not the only concern the use of ChatGPT raises.

(Automation can, and often does, lead to nightmares in a customer service scenario, but there are rare positive experiences. Read about one in this Bold story.)

Amazingly, ChatGPT artificial intelligence is capable of generating a variety of content that comes extremely close to human efforts. In fact, even language experts and professors have a hard time distinguishing between AI-generated content and content written by people. From a broader perspective, the use of ChatGPT for content could create a number of new issues. Its use in education is an obvious one, as is its ability to put writers and journalists out of work. But ChatGPT might also threaten Google’s dominance in search and pose new cybersecurity worries. For these reasons, it’s worth taking a deeper dive into this newly released technology.

“Expect a flood, people, not a trickle…I expect I’m going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material.” – Darren Hick, Philosophy Professor, Furman University

An Education Disruptor

In a recent experiment reported in the New York Times, language experts, professors, and well-known authors were surveyed regarding ChatGPT. They were asked to distinguish material written by fourth graders from material produced by ChatGPT artificial intelligence. The instructions provided to the AI-based system specified the grade level at which the writing should be composed. None of the panel was able to consistently tell the difference between the materials. While some facts may have been misreported or false, the actual writing itself was excellent. Not only was it creative, engaging, and well-structured, but it was also original.

The obvious problem with this relates to academics in general. There’s little doubt students will use ChatGPT for content for a number of assignments. ChatGPT can compose messages, write emails, complete essay assignments, and even write poetry. And it does so remarkably well. Plus, because ChatGPT artificial intelligence samples from a probability distribution, it virtually never generates the same content twice. Every piece of writing it produces is original and unique, which makes it nearly impossible for plagiarism software to detect. Unless professors identify factual errors or poor citations, identifying ChatGPT-generated content is extremely hard. And proving it is even harder.
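The non-repetition described above comes from how such a model picks each next word: it samples from a probability distribution over candidate tokens rather than always choosing the most likely one. A minimal sketch of weighted sampling in Python, assuming a made-up toy distribution (the tokens and probabilities below are purely illustrative, not drawn from any real model):

```python
import random

# Toy next-token distribution; in a real model this comes from a softmax
# over tens of thousands of tokens. These values are purely illustrative.
next_token_probs = {
    "original": 0.4,
    "unique": 0.3,
    "creative": 0.2,
    "novel": 0.1,
}

def sample_token(probs, rng):
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Repeated draws can differ, which is why sampled text is
# rarely identical from one generation to the next.
rng = random.Random()
print([sample_token(next_token_probs, rng) for _ in range(5)])
```

Because each step involves a weighted random draw, two generations from the same prompt can diverge at any token, and small early differences compound into entirely different passages.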

“No company is invincible; all are vulnerable. For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.” – Margaret O’Mara, Professor, University of Washington

A Tech Titan Disruptor

Educational systems (and journalists) are not the only ones concerned about the use of ChatGPT for content creation. Given its ability to interact with users in a conversational way, ChatGPT artificial intelligence could enhance search. As such, some suggest it could overtake Google and its two-decade-long dominance in the field. Not one to take the threat lightly, Google has ramped up its own efforts in AI to protect its turf. Reportedly, the company sees ChatGPT as a potential threat as well and has pulled staff from other projects to focus on it. Could it be that Google is at risk of being displaced too?

Most experts in the field don’t think Google has much to fear at this point. But that doesn’t mean other artificial intelligence products won’t soon be available, triggering more serious concerns. The transformer model used in ChatGPT artificial intelligence is capable of understanding natural language. Therefore, similar and more advanced models could enhance the ability to find information in a more precise manner. The use of ChatGPT for content search may not be that advanced yet. But Google is aware that technologies down the road likely will be.
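The idea behind model-enhanced search can be sketched simply: represent queries and documents as vectors ("embeddings") and rank documents by how close they are to the query, so meaning rather than keyword overlap drives the results. The three-dimensional vectors below are hand-picked for illustration; a real transformer would produce high-dimensional embeddings learned from text:

```python
import math

# Hypothetical document "embeddings" (hand-picked, not model-generated).
docs = {
    "ChatGPT is an AI chatbot": [0.9, 0.1, 0.0],
    "Google dominates web search": [0.1, 0.9, 0.2],
    "Recipes for apple pie": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, docs):
    """Rank documents by similarity to the query embedding, best first."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# A query embedding near the chatbot document ranks it first.
print(search([0.8, 0.2, 0.1], docs))
```

This style of semantic retrieval is one reason conversational models are seen as a plausible complement, or threat, to keyword-based search engines.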

“ChatGPT highlights two of our main concerns – AI and the potential for disinformation. AI signals the next generation of content creation becoming available to the masses.” – Steve Grobman, Senior Vice President and Chief Technology Officer at McAfee

A Cybersecurity Threat

When it comes to ChatGPT artificial intelligence, it is capable of doing many things. It can make conversation and ask follow-up questions. It can challenge incorrect premises and even admit its own mistakes. But one of the important safeguards built into the use of ChatGPT for content is its ability to reject inappropriate requests. For example, requests to write malware or phishing code would be refused. But that doesn’t mean there aren’t still additional cybersecurity risks associated with these new technologies. Though it may not directly create malicious code for hackers, it can still be utilized in concerning ways.

The ChatGPT artificial intelligence is a great innovation for content creation, but is it ready to replace human content creators?

Notably, the most likely misuse of ChatGPT artificial intelligence outside of academics involves the creation of disinformation. Already, countries are leveraging such campaigns to undermine other nations’ security, as are isolated groups within nations. The concern with the use of ChatGPT for content is its ability to create more realistic, semi-factual, and believable material. This can then be used in a more convincing manner to persuade others. In addition, phishing content can also potentially be created in a more refined and tailored way with ChatGPT. The tool makes it much easier to generate the highly targeted campaigns known as “spear-phishing.” These are some of the most concerning aspects of these new AI-based developments.

New Threats Bring New Opportunities

It’s quite clear that ChatGPT artificial intelligence ushers in a new era of technology that will have many uses. The vast majority of these uses can serve to make useful information more available and to aid composition. When the use of ChatGPT for content is done well, these are worthwhile benefits to humankind. But at the same time, all new technologies have pitfalls that must be considered. In this regard, ChatGPT will pose new threats that will require new solutions and safeguards. Yet this is also where opportunities for new advances and discoveries lie. Indeed, ChatGPT may soon disrupt many industries previously thought to be untouchable. But it will also create a demand for new solutions that businesses will need to address.

 

Sometimes businesses are like rose bushes: they need to be pruned to grow. Read more about prune and grow, and Musk’s use of it, in this Bold story.
