Threat actors are becoming more technically sophisticated, organized and professional. This includes leveraging artificial intelligence (AI) to make attacks faster and more effective, and targeting high-value industries such as finance.
In response to these trends, financial institutions will have to rely on the power of AI to stay one step ahead.
The Professionalization of Ransomware Actors
In the not-so-distant past, ransomware attacks were usually conducted by small, rather insular groups. Although there were certainly attacks attributed to state actors or groups with ties to foreign governments, most ransomware attackers were plain old criminals, usually acting on their own.
Today, the ransomware industry is professionalizing. Individual groups are specializing in different parts of the ransomware process. Some are focusing on the initial stages of identifying victims and finding potential paths for attack, becoming access brokers to those targets.
Other groups are leveraging that access and specializing in exploiting vulnerabilities, compromising targets and deploying ransomware or exfiltrating data. Another category of criminals takes on the role of the “face” of the attack, communicating and negotiating with victims. The stages of this ecosystem are now automated or bundled into kits that require less skill to execute.
As these groups specialize in their respective areas, they are getting better at their jobs. They are building up expertise, and adopting or developing increasingly sophisticated tools, often leveraging artificial intelligence.
In recent months there have been waves of compromises across specific sectors that highlight how these techniques are being refined.
In April 2025, several UK retailers were compromised. In June 2025, it was insurance companies, and in July 2025, multiple airlines were targeted.
General Countermeasures in Financial Services
There are many actions financial services firms can take to reduce the likelihood of a successful social engineering attack. The most relevant of these steps fall into three main groups:
Reduce Your Footprint
There is a lot of information available about your company, executives and staff, as well as the connections between them. Commercial databases and breach dumps describe roles, management structure and personal details such as home addresses, family information, schools and other affiliations.
This information makes it easier for a threat actor to target the help desk and impersonate an employee. Companies should take advantage of commercial services that manage the removal of personal data from these databases, shrinking this footprint.
Improve Access Management
As many standards and regulations recommend, multifactor authentication (MFA) is a key control for reducing the risk of successful social engineering attacks. MFA should be configured to limit the options available to threat actors and prevent them from accessing the environment even when they hold valid usernames and passwords.
Establishing conditional access restrictions in which only enterprise managed devices are allowed to connect to the internal environment is also a good way to reduce the likelihood of successful compromises.
Helpdesk teams should be trained to rigorously validate user identities, require in-person verification for sensitive requests where feasible, and ensure that non-corporate devices are not onboarded into the access management solution.
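The conditional access restriction described above can be illustrated with a minimal sketch. The device registry, field names and compliance checks here are hypothetical, not tied to any specific vendor product:

```python
# Illustrative sketch of a conditional-access check: only connections from
# enterprise-managed, compliant devices are allowed. The registry and field
# names are invented for this example.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    enterprise_managed: bool
    compliant: bool  # e.g. disk encryption on, EDR agent running

def allow_connection(device: Device, managed_registry: set) -> bool:
    """Deny access unless the device is registered, managed and compliant."""
    return (
        device.device_id in managed_registry
        and device.enterprise_managed
        and device.compliant
    )

registry = {"LAPTOP-FIN-0042"}
corp = Device("LAPTOP-FIN-0042", enterprise_managed=True, compliant=True)
byod = Device("personal-macbook", enterprise_managed=False, compliant=True)

print(allow_connection(corp, registry))  # True
print(allow_connection(byod, registry))  # False
```

In a production environment this decision would be enforced by the identity provider's conditional access policies rather than application code, but the logic is the same: possession of valid credentials alone is never sufficient.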
Enhance Detection
There are many telltale signs that an account has been compromised, such as “impossible travel,” in which a single account connects from two distant regions within a window too short for physical travel.
"Once the threat actor gains access to a system, AI can run the attack on a largely autonomous basis"
Companies can use out-of-the-box solutions to detect these signals. They should also establish alerting for unusual access patterns, such as a device name that does not match the enterprise naming convention, or an account being granted a stand-alone MFA token that bypasses the authenticator app.
Security operations centers (SOCs) should develop additional alerting to identify unusual access cases that warrant investigation.
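The “impossible travel” check is straightforward to sketch: compute the great-circle distance between two login locations and flag the pair if the implied travel speed is implausible. The 900 km/h cutoff (roughly airliner speed) is an assumption chosen for illustration:

```python
# Illustrative "impossible travel" detector: flag two logins from the same
# account whose implied travel speed exceeds a plausible threshold.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (timestamp_in_hours, latitude, longitude)."""
    t1, lat1, lon1 = login_a
    t2, lat2, lon2 = login_b
    hours = abs(t2 - t1)
    if hours == 0:
        return True  # simultaneous logins from two different places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# London at 09:00, then Singapore one hour later: ~10,800 km in 1h -> flagged.
print(impossible_travel((9.0, 51.5, -0.1), (10.0, 1.35, 103.8)))  # True
```

Real detection platforms layer additional context on top of this, such as known VPN egress points and a user's historical login geography, to keep false positives manageable.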
The AI Impact
Combining large language models, machine learning and other forms of AI accelerates workflows in both legitimate and criminal enterprises. In the case of ransomware attacks, AI solutions can dramatically speed up the process of compromising a victim.
AI applications can parse data and, without human input, use names and other details to craft personalized communications designed to gain trust and capture credentials from individuals.
Once the threat actor gains access to a system, AI can run the attack on a largely autonomous basis. Instead of getting inside the system and manually deleting or freezing data, threat actors can leverage the accelerated and reliable nature of AI to execute their attacks faster and move on to the next victim.
Not only are the resulting attacks getting faster and easier for criminals to pull off – they are also getting more effective. Over the last several years, threat actors have moved from sending personalized emails to personalized texts as a means of duping victims.
Today, AI is combining those written communications with telephone calls or even video. For example, an employee might get an email or text from their CEO inviting them to jump on a last-minute Zoom or Teams meeting.
When the employee joins the meeting, the CEO greets them and asks to be reminded of some banking information, or even for a payment to be made to a certain account. The image and voice of the CEO are entirely fake.
AI tools generate them from publicly available videos of the CEO. These deepfakes are very sophisticated. The threat actors sometimes insert video freezes and other glitches to help mask the charade, having their fake CEO explain that they are having internet issues that day.
Enhancing Defenses Through AI
Fortunately, the good guys are using AI too. Corporate cybersecurity teams and cybersecurity vendors were among the earliest AI adopters. Both groups have poured millions of dollars into AI and continue to make investments in AI-related research and development that dwarf the resources of even the most well-equipped criminal collective.
Those investments are already having an impact. One example is the enhancement of capabilities related to SOC automation, both from a Managed Security Service Provider (MSSP) and directly for in-house SOC teams.
Commercially available AI solutions are combing through millions of entries to identify patterns and anomalies and detect cyber threats much earlier. Once “events” are detected, AI tools can sort them into high-priority threats that require immediate attention and low-level threats. In the past, many of these low-level threats would never be investigated at all.
There were just too many events, and SOC staff had to prioritize high-risk items. Today, AI applications can analyze all these events and identify connections and patterns among them that would be nearly impossible for a human analyst to spot.
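The triage step described above can be sketched with a simple rule-based scorer. Production SOC tooling uses learned models over far richer features; the signal names and weights below are invented for illustration:

```python
# Illustrative event-triage sketch: score security events with simple rules
# and surface the highest-risk ones first. Weights are hypothetical.
def score_event(event: dict) -> int:
    score = 0
    if event.get("impossible_travel"):
        score += 50
    if event.get("unmanaged_device"):
        score += 30
    if event.get("mfa_bypass_token_issued"):
        score += 40
    if event.get("privileged_account"):
        score += 20
    return score

def triage(events, high_threshold=50):
    """Split events into high- and low-priority lists, highest score first."""
    ranked = sorted(events, key=score_event, reverse=True)
    high = [e for e in ranked if score_event(e) >= high_threshold]
    low = [e for e in ranked if score_event(e) < high_threshold]
    return high, low

events = [
    {"id": 1, "impossible_travel": True, "privileged_account": True},
    {"id": 2, "unmanaged_device": True},
]
high, low = triage(events)
print([e["id"] for e in high], [e["id"] for e in low])  # [1] [2]
```

The value AI adds is not the scoring itself but correlation: linking the low-level event to others across time and systems so that patterns invisible to a human analyst still get investigated.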
AI is helping companies prevent attacks before they happen by identifying vulnerabilities and shoring up weaknesses. AI solutions are also helping companies recover faster when attacks do occur.
Finally, AI is automating or at least dramatically accelerating tasks like compliance, regulatory reviews and audits, freeing up cybersecurity teams to spend more time actually defending the company against attack.
Winning the AI Arms Race
Despite these robust defense capabilities, the old cliché remains true: corporate cybersecurity teams have to win every day, while cybercriminals only need to win once. With threat actors becoming more sophisticated, professionalized and confident, companies need the power of AI.
That means making internal investments to apply AI wherever possible: automating processes, making security more effective and efficient, and freeing up resources by reducing time demands on cybersecurity teams.
It also means pushing hard on vendors and partners to develop and quickly deploy AI solutions that buttress cybersecurity.
Make no mistake, financial service firms are in an AI arms race with threat actors. We are well positioned to win that race if we make the investments in AI needed to stay one step ahead.