The threat of deepfake attacks is changing faster than ever.  

Accenture Cyber Intelligence (ACI) has conducted extensive research on the evolving deepfake threat landscape and found that cyberthreat actors have a high (and rising) intent, capability and opportunity to use deepfakes. 

Combined, these factors have created a dangerous threat landscape in which deepfakes are changing both in use and sophistication.  

The Growing and Changing Threat of Deepfake Attacks

The Anatomy of a Successful Deepfake Attack 

ACI research has determined that the success or failure of a deepfake attack centers on three primary parameters: the quality of the deepfake, the accuracy of the content and the delivery mechanism.  

If any of these elements fall short, so does the deepfake ploy.  

In 2024, ACI found that threat actors enhanced all three parameters in their attacks, drastically increasing their ability to conduct high-fidelity deepfake attacks against enterprises. 

Skyrocketing Demand for Deepfakes 

By filtering dark web searches for verified offerings from reputable sellers (typically those designed for targeting enterprises), ACI identified a continuous increase in demand for deepfake services on key underground forums since deepfakes first appeared on the dark web in 2022. Between 2022 and 2023, demand spiked by more than 110%. ACI also projects an additional increase of 35% to 65% by the end of Q4 2024.  

This difference is even starker when comparing Q4 2024 with Q4 2022: demand for the purchase and sale of deepfake capabilities increased by more than 370% between the two quarters. ACI assesses with moderate confidence that this skyrocketing demand indicates dark web criminals are increasingly willing to put money and effort behind acquiring and using deepfake capabilities.  
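As a rough illustration, the annual figures above can be compounded to show the implied multiplier over baseline 2022 demand. This is illustrative arithmetic only; the separate Q4-to-Q4 comparison is measured on quarterly data and is not derived from these annual figures.

```python
# Illustrative arithmetic compounding the demand figures cited above.
# Baseline demand in 2022 is normalized to 1.0 (hypothetical unit).
baseline_2022 = 1.0

# Demand spiked by more than 110% between 2022 and 2023.
demand_2023 = baseline_2022 * (1 + 1.10)

# ACI projects an additional 35% to 65% increase by the end of Q4 2024.
demand_2024_low = demand_2023 * (1 + 0.35)
demand_2024_high = demand_2023 * (1 + 0.65)

print(f"2023 vs 2022: {demand_2023:.2f}x baseline")
print(f"2024 vs 2022: {demand_2024_low:.2f}x to {demand_2024_high:.2f}x baseline")
```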

A Shift to Targeting Enterprises 

Threat actors are also significantly changing how they use deepfakes. In 2021, underground actors focused on using deepfakes for common fraud and cryptocurrency theft. However, in 2024, they instead focused on using deepfakes to bypass enterprise security measures and gain access to corporate accounts, indicating a shift to broader enterprise compromise.  

ACI research assesses that the most common types of deepfake attacks against enterprises are business email compromise (BEC) and vendor email compromise (VEC). ACI has observed multiple dark web threads dedicated to the combination of deepfakes and BEC.  

Dark Web Post Showcasing Interest in Combining Deepfakes With BEC Capabilities

This combination has resulted in sophisticated attacks and significant BEC compromises, including a $25 million loss at a British engineering company in early 2024. However, while BEC and VEC activity is the most obvious application of deepfakes, the threat is rapidly shifting toward broader enterprise compromise.  

Beyond BEC and VEC attacks, attackers are also using deepfakes to drive malicious job applications. If an attacker can successfully use a deepfake to land a job, they can gain access to enterprise networks or corporate devices, enabling data exfiltration or malware deployment.  

In one example familiar to ACI, malicious actors used malware to obtain a company’s credentials, which they then used to join the company’s enterprise video conference. On the call, the actors employed a dynamic deepfake, which allows for real-time face swapping and vocal changes, to help convince employees to download a security update that was in fact malware. Although automated tools mitigated the attack before it could unfold, the deepfake component itself appeared highly convincing.  

This attack is part of a key trend ACI observed throughout 2024: Deepfakes are increasingly portraying middle management or IT staff, as opposed to C-suite executives. ACI assesses with high confidence that this shift is the result of enterprises educating their employees on C-suite deepfakes, leading to increased skepticism toward unusual C-suite requests or behavior. Criminals are therefore shifting their focus to immediate superiors, IT staff, or middle management, since those roles are more likely to contact regular employees, making the ploys more believable.  

Wider Range of Deepfake Capabilities

Deepfake capabilities are also changing dramatically. Multiple dark web services offer access to powerful deepfake capabilities, from low-cost, basic deepfake generators to increasingly capable, high-end solutions. ACI has observed threat actors selling deepfake services for as little as $20 per minute to upward of $20,000 per minute.  

This breadth of options allows threat actors of all capabilities—from well-resourced threat groups to novice threat actors—to leverage deepfakes. For example, in June 2024, the threat actor d0ber18 advertised a real-time face-swapping service on the cybercrime forum Exploit.  

Deepfake Offering on Dark Web Forum 

In the post, the threat actor specified their tool works within WhatsApp, Telegram, Discord and “many other” services, which the actor later specified include any platform that uses a camera, making it useful for enterprise targeting.  

Real-time deepfakes (i.e., dynamic deepfakes) are a much rarer dark web offering than static, prerecorded deepfakes, making d0ber18’s offer particularly valuable. Furthermore, d0ber18 priced the service at $2,500 for indefinite rights, while ACI has often observed deepfake solutions advertised at between $400 and $1,000 per minute.  
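Simple break-even arithmetic shows why flat-rate pricing is attractive to buyers. The figures below come from the pricing cited above; the calculation itself is illustrative.

```python
# Illustrative break-even arithmetic for the pricing cited above:
# $2,500 for indefinite use vs. the $400-$1,000 per minute ACI has
# typically observed for advertised deepfake solutions.
flat_price = 2_500                             # advertised flat price (USD)
per_minute_low, per_minute_high = 400, 1_000   # typical per-minute range (USD)

# Minutes of footage after which the flat price beats per-minute pricing.
breakeven_vs_high = flat_price / per_minute_high
breakeven_vs_low = flat_price / per_minute_low

print(f"Flat price pays for itself after {breakeven_vs_high:.1f} to "
      f"{breakeven_vs_low:.1f} minutes of per-minute pricing")
```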

ACI observed a sample video from d0ber18 and assessed the quality was surprisingly good for the cost. The solution worked well with head turns, blinking and mouth gestures, all typical giveaways for a potential deepfake. Most strikingly, the solution allowed items, such as hands, to pass in front of the face without the deepfake glitching.  

Increasing an Attack’s Credibility 

To ensure their deepfake attacks are convincing, malicious actors are increasingly focusing on more believable delivery mechanisms, such as phone number spoofing, SIM swapping, malicious recruitment accounts and information-stealing malware. These methods allow actors to deliver deepfakes convincingly and significantly increase a ploy’s overall credibility.  

For example, formerly active threat groups such as LAPSUS$ and ALPHV/BlackCat, as well as currently active groups such as Scattered Spider and RansomHub, have leveraged compromised enterprise credentials to launch social engineering attacks from within an organization’s network perimeter. An attack coming from a trusted corporate account naturally increases the credibility of the social engineering ploy, and the same methodology can be applied to deepfakes. 

With these enhancements, malicious actors can integrate deepfakes into their attack chains to, for example, gain initial access, move laterally within a target’s network and deploy ransomware. 

Lower Barriers to Entry 

The barriers to creating a high-end deepfake are falling. As hardware and software have improved, malicious actors have needed less data, and lower-quality data, to create a deepfake.  

For example, the ACI team has used only five minutes of high-quality video material to create a high-quality video deepfake, and two minutes of high-quality audio to create a low-quality vocal clone. Since most malicious deepfakes are based on public data sources (photos, text, audio and video footage posted online), these changes have made it significantly easier for dark web criminals to create deepfakes.  

Exploiting Publicly Available Information  

High-value deepfake targets, such as C-suite executives, key data custodians and other significant employees, often have moderate to high volumes of publicly available data. In particular, employees appearing on podcasts, giving interviews, attending conferences or uploading videos expose significant volumes of moderate- to high-quality data for use in deepfakes. Understanding individual data exposure is therefore a key part of accurately assessing an enterprise’s overall deepfake risk.  
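An individual's exposure can be triaged with a simple rule of thumb. The sketch below loosely follows the data volumes cited in this report (roughly five minutes of quality video, two minutes of audio); the categories and cutoffs are hypothetical and illustrative, not ACI methodology.

```python
# Hypothetical sketch of triaging an employee's public data exposure
# for deepfake risk. Thresholds loosely follow the figures cited in
# this report; the cutoffs and labels are illustrative assumptions.

def deepfake_exposure_risk(video_minutes: float, audio_minutes: float) -> str:
    """Classify how much usable public footage an employee exposes."""
    if video_minutes >= 5 and audio_minutes >= 2:
        return "high"    # enough material for a full audiovisual deepfake
    if video_minutes >= 5 or audio_minutes >= 2:
        return "medium"  # enough for a video-only or voice-only clone
    return "low"

# Example: an executive with conference talks and podcast appearances.
print(deepfake_exposure_risk(video_minutes=12, audio_minutes=30))  # high
```

In practice, such a triage would also weigh data quality and the employee's access privileges, but even a coarse classification helps prioritize awareness training.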

Furthermore, ACI research indicates industries such as consulting, financial services, technology, insurance and government often have sufficient publicly available data to enable medium- to high-quality deepfakes.  

Ransomware groups are also continuously leaking a high volume of enterprise data. This information can fuel deepfake content that “talks” about genuine internal documents, employee relationships and other internal details. Leaked enterprise data is also increasingly indexed, searchable and organized on the dark web, making it easy for threat actors to find desirable information, such as budgets and invoices, for use in their deepfake attacks. 

Additionally, ransomware groups and notoriety-focused threat actors are increasingly leaking personal information as part of extortion events, including Social Security numbers, phone numbers, home addresses and passport or driver’s license scans. The exposure of high-fidelity personal data has allowed threat actors to target individuals with deepfakes as well.  

How to Fight Deepfake Attacks 

The combination of high-quality deepfake tools, readily available leaked enterprise data and compromised internal accounts has made deepfake attacks significantly more effective.   

In response, organizations and individuals alike should take proactive action to combat this emerging challenge. Employees should practice good personal cyber hygiene. Enterprises should make it standard practice, when profiling and assessing organizational risk, to understand what data is publicly available on key employees and whether it could be sufficient to enable deepfake generation. Additionally, enterprises should extend deepfake-awareness training and mitigation techniques beyond C-suite executives to address the increasingly likely threat against other roles in the company.