
The Issues with Detecting Deepfake AI Content

(Editor’s note: Welcome to Bold’s series on AI-created deepfakes. The previous installment dug into the pros and cons of using them for digital marketing; read it here. This installment dives into detecting deepfake AI content.)

Deepfake phishing attacks are just one of many digital wrinkles to come from AI-generated content.

Artificial intelligence-driven chatbots may have created a buzz online, but deepfake content is the double-edged sword wielded by digital marketers and bad actors alike, with the latter using it as a tool for cybercrime worldwide. In 2022, identity fraud became one of the most common cybercrimes in North America, and although its absolute numbers remain lower than those of other fraud types, deepfake fraud doubled in less than a year. This growth is largely due to deepfake technology becoming easier to use and manipulate, and the resulting files are now harder to identify.

Recent studies showed that at least 70% of online users do not know what deepfakes are, and 57% have difficulty recognizing them. Most concerning of all, nearly half of internet users worldwide cannot distinguish original media files from fake or manipulated material.

The Rising Number of Deepfake Phishing Attacks

Deepfake technology allows AI to create and manipulate videos, audio, and images to benefit their creators and potentially harm their targets. The growing number of deepfake apps and sites makes these files easier than ever to produce and access. Although the technology offers advantages in many industries, it also poses serious enterprise IT risks, such as deepfake phishing: a cybercrime that uses deepfake content to trick targets into making unauthorized payments. Attackers can also manipulate victims into handing over sensitive details that cybercriminals can use to their advantage.

In the past few years, deepfake phishing attacks have evolved and now fall into several categories, two of which are real-time and non-real-time attacks.

  • Successful real-time deepfake phishing attacks use highly sophisticated deepfake files that easily trick victims into believing they are genuine. These attacks typically manufacture a strong sense of urgency, such as imaginary deadlines and penalties, and the attackers keep pressuring their victims until they panic and react.
  • Non-real-time deepfake phishing attacks use deepfake files to impersonate someone delivering the attacker's preferred message, which is then distributed through numerous communication channels. Unlike real-time attacks, these crimes remove time pressure from the criminals, allowing them to craft their most convincing deepfakes, ones that can slip past security filters. Non-real-time attacks also cast wider nets and victimize larger groups of people.

It is essential for security leaders to build user awareness of these attacks. The best approach is regular security awareness training delivered in engaging, competitive settings.

"Hit pause" is a core principle of all security awareness training. Many real-time deepfake phishing attacks rush their victims into ill-advised decisions through fear and panic. To keep such events from escalating, people need to know how to stay calm under pressure and check the authenticity of the file. Hitting pause trains people to respond politely and to confirm the other party's identity through a separate channel.

Challenging the other party is another highly effective technique against deepfake phishing. Security awareness training helps sharpen a user's eye for detail, which they can use to challenge suspicious interactions. Challenging revolves around information both parties should know, such as personal details, deeper likes and dislikes, and shared memories that can be fact-checked.

Deepfake video detection is possible, but it requires an understanding of the nuances and limitations of the technology.

Detecting Deepfakes

Detecting deepfake files is becoming increasingly difficult. Several experts have noted that deepfakes will only continue to become more hyperrealistic and nearly indistinguishable from genuine media, and not every deepfake AI detector is highly accurate. Still, there are details people can focus on to help identify deepfake videos.

Blinking

Focus on how often the person in the video blinks. Real people blink far more often, while deepfake subjects blink less, and when they do, the blinks can look forced and unnatural.
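Automated detectors can formalize this cue. As a minimal, hypothetical Python sketch, suppose an upstream face-analysis step has already produced a per-frame eye-openness score for the subject (this sketch does not compute those scores itself); blinks can then be counted and the rate compared against a typical resting human rate of roughly 15–20 blinks per minute. The `0.2` closed-eye threshold and the `8` blinks-per-minute cutoff are illustrative assumptions, not established values:

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in per-frame scores.

    eye_openness: per-frame scores in [0, 1], where low values mean
    the eyes are closed (assumed to come from an upstream face model).
    """
    blinks = 0
    eyes_were_open = True
    for score in eye_openness:
        if eyes_were_open and score < closed_threshold:
            blinks += 1          # eyes just closed: one blink begins
            eyes_were_open = False
        elif score >= closed_threshold:
            eyes_were_open = True
    return blinks


def blink_rate_suspicious(eye_openness, fps, min_blinks_per_minute=8):
    """Flag clips whose blink rate is far below a typical human's."""
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute
```

A low blink rate alone is not proof of forgery; it is one weak signal that, in practice, would be combined with the other cues described below.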

Face and Body

Full-body forgery takes time and work, so most deepfakes limit themselves to face substitutions. To detect one, look for inconsistencies between the proportions of the face and the body, and check whether facial expressions match body movements and posture.

Video Length and Sound

Producing a convincing deepfake takes hours of work and mastery of the underlying algorithm, which is why most are only a few seconds long. Also pay attention to everyone's lip movements in the video, as this helps you check whether the audio matches the action.

Inside of the Mouth

One notable weakness of deepfake technology is how poorly it reproduces the target's tongue, teeth, and oral cavity during speech. AI tools often blur the inside of the mouth instead.


