The rate at which artificial intelligence is invading our world is nothing short of spectacular. AI platforms let us generate a variety of images, some artistic and others realistic, from simple text commands. Likewise, we can generate and refine volumes of content by providing a few directed prompts. And soon, AI personal assistants and apps will be available to help us with our daily tasks. While these developments are exciting and have great potential, they also carry serious risks. This is especially true in media, where fake AI-generated articles are appearing rapidly, and some media professionals are concerned about the impact of AI on journalism moving forward.
The use of AI-generated content in media and other outlets isn’t new. In fact, some publishers have invested heavily in AI and machine learning as part of their publishing strategies. But others have taken this a bit too far, passing off fake AI-generated articles as the work of actual human authors. In recent weeks, even the reputable Sports Illustrated has been accused of such actions. Not only was the content produced by AI bots, but the journalist who supposedly wrote it was AI-generated as well, from image to bio. Given the pervasiveness of these types of media publications, the impact of AI on journalism is already being felt. And these effects aren’t likely to be positive when it comes to the accuracy and validity of the facts.
Fake AI-Generated Articles at Sports Illustrated
Recently, a report surfaced accusing Sports Illustrated of publishing fake AI-generated articles. The content in question appeared in the magazine’s product review sections rather than in featured articles. Regardless, the magazine did more than just produce AI-based text about the products. The author, who was portrayed with an actual human image and a detailed biography, was fabricated. Using AI tools, the content writer was crafted to be as appealing as possible to the reader. Given this, how could the product review even be trusted? More importantly, how can the reader even know if the “person” giving the review is real? The impact of AI on journalism in this instance is clear, and it isn’t a favorable one.
This turn of events is disheartening, especially considering the reputation Sports Illustrated has built over the years. However, the current publisher controlling the magazine’s content is the Arena Group, and it relies on third-party content providers. This is notably true for product review content, which is where the fake AI-generated articles were identified. In response to the accusations, the Arena Group pointed to AdVon Commerce, the third party in question, as the source. But even if that’s true, Sports Illustrated and the Arena Group have a responsibility to vet content before publishing. Verifying a source, and in this case making sure that source is human, is essential to reporting accuracy, and potential deepfakes must be investigated as well. If this is no longer being done, then the impact of AI on journalism will be far from ideal.
One Among the Many
Unfortunately, the impact of AI on journalism isn’t limited to Sports Illustrated. Other magazines under the Arena Group’s umbrella have also been found to contain fake AI-generated articles. The Street featured not only embarrassing fake AI content but fake AI journalists as well. Inaccurate AI-generated content was likewise found in The Men’s Journal, another of the group’s publications. In each of these instances, once the fake content was reported, the Arena Group took down the articles. But that didn’t mean it learned its lesson and changed its ways. In most instances, fake AI-generated articles would simply reappear under another fake AI journalist.
Other media publishing groups are engaging in these activities as well. For example, Red Ventures, which publishes CNET and Bankrate, has released poor AI-produced content containing inaccuracies. G/O Media has done the same in its publications Gizmodo and The A.V. Club. And it was reported earlier this year that BuzzFeed was going all in on AI content, including a travel guide with fake AI-generated articles and reviews. It’s evident that the impact of AI on journalism is expanding rapidly. The need to drive ad dollars to these publications at the lowest possible cost appears to be a key factor. AI content offers a simple, cheap, albeit messy, solution to that problem, and it seems more media groups are jumping on the bandwagon every day.
The Impact of AI on Journalism
In many ways, the impact of AI on journalism could be beneficial. Having AI as a tool to quickly gather information for review and to define key concepts promotes efficiency. When that information is promptly validated by journalists, it can also improve thoroughness and surface new perspectives. But when AI is used without such discretion or oversight, these benefits quickly erode. The fake AI-generated articles and reviews published by the media outlets above lack such efforts. As a result, the content readers see cannot be verified and may even be nonsensical at times. It’s not surprising that media consumers are quickly becoming highly skeptical of anything they read.
These recent reports regarding Sports Illustrated and other major media groups are thus concerning. The digitization of media forced publishers to adopt different revenue models, and advertising clicks remain an important source of income. But sacrificing journalistic ethics in the process isn’t the right way to go. Not only is it irresponsible for a steward of facts and truth, but it will also lead to declining readership as readers’ trust wanes. Without question, the impact of AI on journalism can be both positive and powerful. But that requires being honest when AI-generated articles are used and limiting their role when oversight is lacking.