Fraudulent activity is everywhere in the digital age, and it’s only getting smarter and more expensive. In 2023 alone, businesses in the U.S. lost over $12.3 billion to fraud, a number expected to more than triple to $40 billion by 2027, according to a Deloitte report. At the center of this surge is social engineering: the leading method behind 98% of cyberattacks.
And with the rise of generative AI (GenAI), these attacks have evolved into hyper-realistic scams that trick even the most cautious employees into transferring millions to fraudulent accounts.
Today, it is safe to say that social engineering has become the most dangerous and costly form of cybercrime that businesses face.
The Growing Sophistication of Social Engineering
Social engineering tactics started with rudimentary tools like phishing campaigns, but artificial intelligence (AI), and generative AI in particular, has elevated attack effectiveness to entirely new levels.
In its Digital Fraud: The Case for Change report, Deloitte highlights how AI is enabling fraud at scale. Fraudsters now have access to easy-to-use tools that help them target both financial institutions and individuals with unprecedented precision and far greater success.
Types of Social Engineering Attacks
This more sophisticated and lucrative era of social engineering attacks employs a combination of AI technology, research and psychological tactics. Here’s a quick look at the most cunning plays in the GenAI outlaw’s handbook:
Deepfake Voices and Videos: One of the most concerning trends is the use of deepfake technology to impersonate senior executives and other personnel. A little more than a year ago, this type of attack was unheard of. Now, fraudsters are creating realistic videos or voice recordings of company leaders asking employees to transfer significant sums of money, all under the guise of urgent financial matters.
A big reason these attacks are so effective is that they exploit what might be the weakest link in a company’s cybersecurity program: human trust, especially trust in coworkers and those in high-level positions. These deepfakes are often so convincing, and employee trust so strong, that victims fail to question the authenticity of the request.
Pressure Tactics: A common psychological tactic in social engineering attacks is the use of urgency and pressure. If exploiting the victim’s trust isn’t sufficient to drive action, fraudsters may claim that the requested payment is overdue or tied to an urgent deadline, such as a company acquisition. In some cases, they may even threaten disciplinary action or other negative consequences to push employees into bypassing established security protocols.
Business Email Compromise (BEC): Mass phishing campaigns and spoofed sender addresses now leverage AI to scour public records, social media profiles and data from previous breaches to craft far more convincing phishing messages. But BEC is often just the entry point to more complex attacks. After hacking a CFO’s email account, attackers may quietly monitor internal communications for weeks to study banking relationships, vendor interactions and approval processes. Armed with that intelligence, they identify a legitimate vendor invoice, impersonate your trusted vendor and simply request that payment be sent to a new account. The account is real and appears to belong to your vendor, but it is controlled by the attacker.
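The "new account" switch described above is detectable when the payment system can compare each request against the vendor master record. Here is a minimal sketch of that kind of check; the vendor data, field names and function are hypothetical illustrations, not the API of any specific product:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor_id: str
    account_number: str
    routing_number: str
    amount: float

# Illustrative vendor master records; a real system would query the ERP.
VENDOR_MASTER = {
    "vendor-001": {"account_number": "111222333", "routing_number": "021000021"},
}

def flag_bank_detail_change(req: PaymentRequest) -> list[str]:
    """Return reasons to hold a payment for manual review (empty list = no flags)."""
    reasons = []
    known = VENDOR_MASTER.get(req.vendor_id)
    if known is None:
        reasons.append("unknown vendor")
    else:
        if req.account_number != known["account_number"]:
            reasons.append("account number differs from vendor master record")
        if req.routing_number != known["routing_number"]:
            reasons.append("routing number differs from vendor master record")
    return reasons
```

Even a check this simple would hold the fraudulent invoice above for review, because the attacker's "new account" does not match the record captured at onboarding.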
Real-World Examples of Social Engineering Attacks
The damage caused by social engineering is not just theoretical — it’s happening now and has already led to massive financial losses. Today, even the most well-resourced and successful companies are vulnerable to these new and innovative AI social engineering attacks.
Arup Deepfake Attack, Hong Kong (2024): This is arguably the first attack to fully showcase the possibilities and impact of a successful deepfake. An employee in the Hong Kong office of Arup, a British multinational engineering firm, was tricked into transferring HK$200 million (roughly $25 million) to cybercriminals after joining what appeared to be a video conference with the company’s CFO and other senior colleagues. Every other participant on the call was a deepfake.
The incident shows that even the most well-resourced and successful companies are vulnerable to new and innovative AI social engineering attacks that can result in millions in losses.
Why Current Approaches Fail
Why are some of the most well-resourced and successful companies vulnerable? Many organizations still rely on outdated defenses that fail to address the evolving nature of social engineering. The biggest issue is the tendency to view these threats as primarily email-based, which helps explain why current mitigation approaches fall short.
For example:
Email Security Gaps: Today’s bad actors understand these defenses and successfully operate under the detection radar. How? They target vendor accounts, hijack threads and manipulate email chains and attachments, all while impersonating that vendor’s employees to exploit the trust relationship. Because email security tools are not connected to the broader payment systems, they lack the context to recognize nuanced fraud and miss other signs of malicious activity.
Slow Manual Verification: Manual verification processes are often slow, as humans must scour through volumes of data and complex details. Ultimately, this results in inconsistencies and an inability to keep up with the speed at which fraudsters execute attacks.
Point Solutions: Many organizations use siloed point solutions that focus solely on individual transactions. By providing no insight into the entire payment process, this creates security gaps that attackers can exploit.
Weak Verification Processes: Vendor impersonation and invoice fraud often go undetected because verification processes are not rigorous enough to spot fake vendors or fraudulent requests. Common gaps include inadequate document verification and one-time due diligence performed only at onboarding.
A New Approach to Combating Social Engineering
Social engineering is simply too innovative and complex for systems that cannot monitor the entire payment process, from email and ERP to payment systems. To combat it effectively, businesses must move beyond fragmented security measures and implement a comprehensive approach: one that understands the full scope of the threat and uses advanced technology to detect and prevent attacks before they cause harm.
The features that are key to defusing these attacks include:
Comprehensive Contextual Insight: As touched on above, silos are the death knell to any modern cybersecurity approach. A strong defense system requires an integrated view of email, payment and vendor behavior data to detect irregular patterns across all stages of a transaction. It’s through this bigger-picture view that organizations can spot inconsistencies that might indicate fraud.
Proactive Monitoring of High-Risk Roles: Organizations should focus on securing roles that have access to funds and/or the authority to approve transactions. This is why many deepfake attacks involve the impersonation of CFOs. But don’t just focus on that position. Monitor the finance team, other high-level executives and employees in vendor-facing positions.
AI-Driven Fraud Detection: By leveraging tools such as behavioral AI that can detect anomalies in real time, businesses can identify synthetic threats, such as deepfakes or manipulated voices, across the transaction lifecycle. These systems should continuously learn from new attack methods to stay ahead of emerging threats.
Holistic Verification: A robust verification process should validate all payment-related requests before they are processed. This includes verifying the legitimacy of vendor information and payment instructions, which can prevent fraudulent transfers.
Continuous Monitoring: Real-time alerts and adaptive fraud detection systems can flag high-risk transactions and prevent them from going through. By implementing these measures, businesses can stop fraud before it impacts their bottom line.
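As a toy illustration of the behavioral approach described above, consider flagging a payment whose amount deviates sharply from a vendor’s payment history. Real systems score many more signals (timing, approver, device, geography) with learned models rather than a fixed statistical threshold; the function and threshold here are hypothetical:

```python
import statistics

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a payment whose amount deviates sharply from the vendor's history.

    A simple z-score check: a stand-in for the behavioral models described
    above, which score many signals, not just amount.
    """
    if len(history) < 2:
        return True  # too little history to trust; route for manual review
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold
```

A vendor normally paid around $1,000 per invoice would sail through at $1,020 but be held for review at $250,000, which is exactly the kind of outsized, out-of-pattern transfer seen in the deepfake cases above.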
Social Engineering is Escalating. Are You Ready?
Social engineering has rapidly emerged as one of the most costly cybersecurity threats, and we’re only at the beginning. The increasing availability and ease of use of AI and deepfake technology have captured the attention of fraudsters who are accelerating these threats at a breakneck pace.
If your defenses are stuck in the past, you’ll be outgunned. Businesses must respond with equally sophisticated, holistic and AI-enabled defenses. Only by adopting these proactive measures can organizations effectively combat social engineering attacks, mitigate financial losses and avoid becoming tomorrow’s headline.