
When Actors Age Out, Are Deepfakes the Answer?

A deepfake AI in action

Over the last year, numerous videos have appeared on social media platforms depicting a famous actor or politician. Some of these have been made for entertainment or educational purposes. Others are malicious, designed to influence what viewers believe about that person or their message. Naturally, the intentions matter, but what matters more is our ability to tell deepfake AI from the real deal. So far, the technology seems to be winning, with detection capabilities lagging behind. As such, AI and disinformation risks are growing.

We live in an era in which disinformation and fake news abound. The rise of social media and its capacity to touch millions of people's lives has contributed to these developments. But advancing technologies in artificial intelligence (AI) and machine learning are complicating matters even further. In fact, telling what's real from what's constructed through deepfake AI has become increasingly difficult. And it now appears that AI and disinformation have extended into film and entertainment as well.

“Now that we live in a society of social media channels and information on demand, this world has become flooded with phony or even fraudulent information. Spam has grown into social spam.” – Lutz Finger, Faculty at Cornell University and President of Product and Technology at Marpai Health

Will the Real Bruce Willis Stand Up

As recently reported in multiple news outlets, the likeness of Bruce Willis appeared in a Russian advertisement. A Russian company, Deepcake, created Willis's image using deepfake AI, drawing on segments from his prior films to superimpose his face onto another actor. Watching the commercial, it's quite challenging to tell that it isn't Bruce Willis himself, albeit a notably younger version. And this is just one recent example. Similar deepfake efforts have used AI and disinformation involving other actors: a deepfake Tom Cruise has gained significant popularity on TikTok, as have videos involving Keanu Reeves.

These developments in deepfake AI certainly offer some important benefits. For example, such AI likenesses might be welcomed in Bruce Willis's case: the actor recently retired after being diagnosed with aphasia, a language disorder. These technologies could also save film producers millions by replacing expensive CGI work. More concerning, however, is the impact deepfakes might have on entertainment and actors in general. The rights to an actor's voice and image are certainly their own, yet currently there are no laws or restrictions preventing deepfake companies from pursuing such videos. This could be detrimental in many ways, including enabling AI and disinformation campaigns that work to actors' detriment.

(Artificial intelligence in film making? Yes, it’s real, and Bold has the story.)

“[Deepfake AI] pollutes the information ecosystem, and it casts a shadow on all content, which is already dealing with the complex fog of war. The next time the president goes on television, some people might think, ‘Wait a minute — is this real?’” – Hany Farid, Professor at the University of California, Berkeley

From Actors to Politicians and Leaders

In considering AI and disinformation risks related to deepfakes, there is greater concern when world leaders are involved. During recent elections, a tremendous number of deepfake AI videos emerged, creating serious confusion. Some involved former President Obama as part of an educational campaign highlighting the potential of deepfake AI. Others, involving Hillary Clinton and former President Trump, had less desirable intentions but were just as impressive. One of the most notable even showed Ukraine's President Zelensky appearing to surrender to Russia. While AI and disinformation involving actors is one thing, these types of videos are on a far more concerning level.

For the entertainment industry, deepfake AI can be good; for politics, not so much.

The use of deepfake AI to extend actors' longevity is one thing, but all recognize the potential for harm when deepfake AI and disinformation are used for political advantage. In this regard, Facebook, Microsoft, and Google have all invested in efforts to better detect deepfake videos and content, as have a number of other companies with their own versions of deepfake detection software. Some of those available today include Counter.social, Deeptrace, Reality Defender, and Sensity.ai. But to date, these programs aren't very effective: collectively, they have roughly a 65% accuracy rate, which isn't too impressive, and even Microsoft's Azure Cognitive Services gets fooled 78% of the time. In essence, AI generation technologies are well ahead of the cybersecurity software meant to detect them, and this is particularly true for deepfake content that is well done.
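To make the detection approach concrete, here is a minimal, illustrative sketch of how frame-level deepfake screening is often built: a binary image classifier scores each frame as real or synthetic. The backbone model, file paths, and usage shown below are assumptions for illustration only and do not represent any of the vendors' actual tools.

# A minimal, hypothetical sketch of frame-level deepfake screening in Python,
# assuming a binary CNN classifier (real vs. synthetic) that would be
# fine-tuned on labeled face crops. The backbone, file path, and usage are
# illustrative assumptions, not any vendor's API.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def build_detector() -> nn.Module:
    # Generic ImageNet-style backbone with a single-logit head:
    # sigmoid(logit) is read as the probability the frame is synthetic.
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    return backbone

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_frame(model: nn.Module, frame_path: str) -> float:
    # Score one extracted video frame; in practice many frames would be
    # sampled from the clip and their scores aggregated.
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

# Usage (with trained weights loaded beforehand):
# detector = build_detector()
# detector.load_state_dict(torch.load("detector_weights.pt"))
# print(f"P(synthetic) = {score_frame(detector, 'suspect_frame.jpg'):.2f}")

The modest accuracy figures cited above reflect how easily well-made fakes slip past classifiers of this kind when they encounter content unlike anything in their training data.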

“If you look at other contexts globally where the deepfake is poor quality, or of good enough quality to create room for doubt, it’s not so easy to challenge it directly.” – Sam Gregory of Witness, a human rights group

AI and Disinformation Prevention Strategies

While social media and Internet companies are investing in deepfake AI detection, this alone is not reliable. As with spam filters in the early days of widespread email, it will take time for this software to catch up. In the meantime, other strategies will be important in countering the use of AI for disinformation. Perhaps the most important strategy is simply education: learning to scrutinize sources of information and cross-check content is essential. It's no longer enough to say that "seeing is believing," because deepfake AI is simply becoming too good.

Other important strategies for detecting AI and disinformation still need to be developed further. One approach involves greater transparency about the sources of information. An advertisement might contain what looks to be Bruce Willis, but without knowing its source, it is hard to be sure. If digital videos and other content were registered like non-fungible tokens, however, their sources could be verified, and that provenance could help establish their authenticity. Finally, clearer digital regulations surrounding image and voice rights need to be in place. When is it okay to use an actor's or politician's likeness without their permission, if ever? These are the types of questions that must be asked to better define the deepfake AI landscape. In some instances, deepfakes may indeed offer a great solution to a problem. But it's clear they're not the answer to all problems, especially when the risks of AI and disinformation are high.
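As a rough illustration of that registration idea, the sketch below assumes a public registry that records a cryptographic fingerprint of each clip at release time. The registry contents, publisher label, and file names are hypothetical placeholders rather than any existing service.

# A minimal sketch of source verification via registered content fingerprints,
# assuming an NFT-like public registry that stores the SHA-256 hash of a video
# when its original publisher releases it. The registry entries and file names
# below are hypothetical placeholders.
import hashlib

def fingerprint(path: str) -> str:
    # Hash the file in 1 MB chunks so large videos never sit fully in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical registry: publisher label -> fingerprint recorded at release.
REGISTRY = {
    "studio_official": "fingerprint-recorded-at-release-time",
}

def verify(path: str, claimed_source: str) -> bool:
    # The clip checks out only if its current hash matches the record its
    # claimed source registered; any re-edit or face swap changes the hash.
    registered = REGISTRY.get(claimed_source)
    return registered is not None and registered == fingerprint(path)

# Usage: verify("advertisement.mp4", "studio_official") returns True only if
# the downloaded clip is bit-for-bit the one the studio registered.

A check like this only proves that a clip matches what a source registered; judging whether that source is trustworthy still comes down to regulation and media literacy, as discussed above.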

 
