AI tools have shown a remarkable capacity to create various forms of content. Since AI platforms were introduced for public use, consumers and companies alike have used them to gain a competitive advantage. In many instances, AI has helped writers, artists, and other professionals better manage their workloads. As a tool, AI can enhance efficiency and productivity. But AI can also be used in unethical and even illegal ways. These uses range from plagiarized student essays to fake book reviews to, now, deepfake fraud attacks. It is these latter attacks that have banking officials most concerned, as fraud scams now appear to be occurring at an accelerated rate.
Deepfake fraud attacks have increased significantly this past year. In fact, frauds and various social engineering scams were 44% more frequent in 2022 than in 2021. Social engineering scams specifically, which use AI audio to mimic actual acquaintances, have now become the most lucrative strategy. These are the types of AI advances that make it difficult for banks and governments to target scammers and fraudsters. Even with billions of dollars being spent annually, deterring deepfake fraud attacks is becoming more challenging. Though other technological innovations have also brought increases in fraud, AI seems to be different, at least until better scam-detection technologies are developed.
A Look at AI and Deepfake Fraud Attacks
According to the numbers, AI deepfake fraud attacks and scams cost banks, consumers and companies dearly. For the current year, such scams are on pace to cost over $8 trillion, which is comparable to Japan’s entire economic output. By 2025, analysts anticipate this figure will exceed $10.5 trillion, representing a threefold increase within the decade. In reality, however, this projection may be conservative given new AI techniques like social engineering scams. As AI strategies become increasingly complex and sophisticated, they’re likely to victimize more people for higher amounts. This includes investment-related scams, which still represent the most common fraud event today.
When it comes to AI and deepfake fraud attacks, scammers use a variety of techniques. AI can naturally be used to create realistic-looking images and logos. It can also generate anonymous content to solicit investments and fake purchases from consumers. But the more concerning uses of AI in fraud relate to identity theft, fake accounts, and impersonations. Social engineering scams, for example, capture someone’s voice and generate requests for money using that voice. AI audio interactions then take place with relevant family members in hopes of collecting money. What’s frightening is that only 30 seconds of a voice recording is required, which can often be obtained from social media. Social media is also where information about family members can be found.
Certainly, social engineering scams are worrisome, but these are just the tip of the iceberg. Deepfake fraud attacks also involve the creation of fake photographs and images of consumers. For example, many financial institutions today require photo identification to establish a new account. As such, scammers today are using consumer images to create actual masks that they can then wear online. The masks themselves are produced via 3D printing technologies, and the quality can be extremely impressive. While some may look like Halloween masks, others are exceptionally detailed, approaching film-production quality. And as with social engineering scams, it only takes a limited number of images from social media to create these masks. These are the types of deepfake fraud attacks that AI has introduced, with many others likely to come.
Defensive Technologies Playing Catch-Up
When it comes to addressing AI-powered deepfake fraud attacks, deterrence is a better term than prevention. Financial institutions are constantly in a defensive position, attempting to keep pace with new scammer techniques. This is particularly true with AI tools, which scammers constantly use to create innovative approaches. This includes new social engineering scams that attempt to trick people into believing AI is an authentic person. As one might imagine, banks spend billions on various defensive programs to catch or identify scams. This is because the volume of deepfake fraud attacks is tremendous. One bank in Australia tracks about 86 million scam events daily for a customer base of only 26 million. This highlights the problems financial institutions currently face with AI threats.
While the battle against AI deepfake fraud attacks rages on, banks do have some surveillance options. Many use defensive software that attempts to detect subtle changes in consumer behaviors. In fact, some can notice subtle differences in how a computer cursor navigates a page or in how orders are processed. Others use Natural Language Processing (NLP) to identify patterns in site movements and order destinations to seek out scams. And with social engineering scams, some banks are training both staff and software to detect fake voices. Notably, these types of defensive technologies tend to be reactive rather than proactive. Their ability to prevent the next AI strategy for deepfake fraud attacks is therefore limited. Their strategy is thus to stay as up to date as possible in an effort to minimize the impacts.
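The behavioral detection described above can be framed as a simple anomaly-detection problem: establish a baseline of normal session behavior, then flag sessions that deviate sharply from it. Below is a minimal Python sketch of that idea; the feature names (cursor speed, pause length, form-fill time), the simulated baseline data, and the threshold are all invented for illustration and are not drawn from any bank’s actual system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated baseline of 500 "normal" sessions, each described by three
# hypothetical features: [avg cursor speed (px/s), avg pause (s), form-fill time (s)]
normal_sessions = rng.normal(loc=[300.0, 1.5, 45.0],
                             scale=[50.0, 0.4, 10.0],
                             size=(500, 3))

# Baseline statistics learned from the normal sessions
mu = normal_sessions.mean(axis=0)
sigma = normal_sessions.std(axis=0)

def is_suspicious(session, threshold=4.0):
    """Flag a session whose features deviate too far from the baseline.

    Computes a z-score per feature; any feature more than `threshold`
    standard deviations from the baseline marks the session for review.
    """
    z = np.abs((session - mu) / sigma)
    return bool(np.any(z > threshold))

# A typical human session stays within the baseline
print(is_suspicious(np.array([310.0, 1.4, 50.0])))   # False
# A scripted session moving unnaturally fast and filling forms instantly
print(is_suspicious(np.array([900.0, 0.05, 2.0])))   # True
```

Real systems are far more sophisticated, combining many more signals with machine-learned models, but the underlying logic is the same: flag statistical outliers relative to an observed baseline, then route them for human review.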
A New Era of Deepfake Fraud Attacks
The arrival of AI tools has undoubtedly ushered in a new era of technology for many sectors, including those involving deepfake fraud attacks and scams. From social engineering scams to fake identity look-alikes, new challenges exist. While it might be ideal to detect AI-generated content and productions, such AI-detection software does not yet exist. Instead, surveillance and deterrence are the only real strategies when it comes to financial fraud. And banks are getting tired of footing the bill. Some pressure is being placed on governments to hold consumers responsible when they succumb to fraudulent requests. Others want technology platforms that fail to detect scams to also be held financially responsible. But assigning accountability beyond scammers is tough, especially with so many different techniques being employed. This is why the perils of AI-developed scams are so concerning as we enter what looks to be a scammer’s haven.