The financial services industry is under pressure to build secure, resilient systems, given the high-value nature of the information it holds and growing regulatory demands such as the EU’s Digital Operational Resilience Act (DORA).
This means the sector has tended to be ahead of the curve with developments in cybersecurity, including establishing mature corporate governance strategies and tackling the growing use of AI by attackers.
David Ramirez, CISO at fintech firm Broadridge, has spent most of his 30-year cybersecurity career working in financial services, with leadership roles at brands including Brown Brothers Harriman and Capital One.
Ramirez spoke to Infosecurity about how the financial sector has pivoted to enhance its corporate governance strategies and AI defensive capabilities, offering advice to other CISOs on how to approach these areas.
He discussed how AI is changing the types of skillsets required and who is winning the AI arms race between attackers and defenders.
Infosecurity Magazine: Research has shown that threat actors are extensively using AI to target the financial sector, such as deploying deepfakes to defraud companies. What new techniques/tools are you seeing being deployed in this space?
David Ramirez: We do a lot of threat intelligence and monitor the trends. Deepfakes are definitely starting to pop up. There are reports of both fake voice and video being used.
We’re also seeing information about large language models (LLMs) designed for phishing. It’s something that’s starting to appear more in threat intelligence.
It will accelerate once it gets to critical mass because threat actors also need to learn and adjust to the new techniques and capabilities. We’re going to see more of these attacks happening.
IM: What are the most effective deployments of AI in cybersecurity you have seen in the financial sector?
DR: Cybersecurity is in a privileged space because most of the work can be prioritized and aligned into different structures and models. For example, in governance, risk and compliance (GRC), we have seen good examples of AI being used to accelerate work, such as reviewing security policies and dealing with third-party risk management questionnaires.
Also, you can use AI to engage with end users and deliver training. It simplifies the whole process: you don’t have to spend hours trying to find somebody to record a video. Now you can quickly create material.
From the detection perspective, it provides quick analysis and prioritization of alerts. It’s helping us react faster to events, getting to the right information at the right time.
We’re also exploring data loss prevention (DLP), looking at how to simplify and accelerate the analysis of DLP alerts. It’s the same for access management.
Across the industry we see a lot of existing vendors adopting AI features and also startups coming up with agentic AI solutions to accelerate some of the work that we need to do.
It is putting us into a position where we can be faster and more efficient, enabling us to reallocate time and resources for the challenges that we have.
IM: Who do you believe is winning the AI arms race between attackers and defenders?
DR: The volume of new solutions on the defenders’ side is very encouraging. The industry as a whole has embraced AI and we see real solutions working and making things easier to manage.
There’s the cliché that the attacker only needs to win once, while cybersecurity teams have to win every day. That’s the imbalance of that arms race. But I see a lot of development, features and offerings for defenders.
Right now, there’s a lot of investment, time and energy in this space, and we will have more time to do some of the other work.
Three years ago, we were in a situation where there wasn’t enough time to do all of the things that we wanted to do. But now AI and LLMs are becoming accelerators to move things faster and that’s a great opportunity.
IM: Has the rapid development of AI impacted the types of skills and roles you hire for in your cybersecurity team?
DR: About 10 years ago, we started to see the need for staff with stronger coding skills and a stronger understanding of security as code. APIs and scripting became more available.
Now AI is making that even more important, because you don’t really have the option of somebody just clicking on the screen and making a decision based on that. You need them to be able to engage with the AI agents and automate things.
"The type of skills that you need are now more towards automation, scripting and coding"
The types of skills that you need now lean more towards automation, scripting and coding. Security professionals also have to ensure that AI tools are set up with enough mobility and flexibility to provide prompts and basic information to analysts so they can do things faster.
The plan is to provide more training for existing analysts who don’t have a background in automation and coding, and to ensure that new analysts have that background so they can quickly adopt the new technologies.
It’s not common to find analysts with security and AI work experience. The reality is that for the next few years there’s going to be a lot of training, similar to what happened with cloud adoption, where initially you had to train analysts so they could learn about the specific cloud vendors and their controls. Now that’s an expectation, and it’s easier to find someone with that background.
With AI, we’re going to have to go through a journey of training. Maybe two or three years from now it’s going to be an expectation that they have these skills.
IM: In 2024, you wrote an article for Infosecurity about the importance of corporate governance to achieve strong cyber resiliency. What are the key actions security leaders should take to ensure their strategy aligns with wider business needs?
DR: Understanding business objectives, how the firm is operating and what the most important areas are is key. Use that information to build the security program.
Each company is going to have different challenges, different markets, different regulations. Being able to translate those circumstances into the types of controls and flexibility you want in toolsets is very important.
Also, start providing visibility into the cybersecurity program and transparency through governance, so the organization is aware of what the program is. You want them to understand and feel comfortable with its status. That means when they engage with clients and regulators, they have some of the context and background to share.
You want to enable them to give that message and to bring back some of the signals of what’s going on with the business – maybe clients are going to have expectations on you to do something new, maybe the regulators are planning to make some changes.
The security team cannot be everywhere; you need to leverage those ambassadors to drive the message.
IM: To what extent are business leaders becoming more aware of and involved in their organization’s cybersecurity? Are there any specific real-world examples of this trend that you can share?
DR: It has been an interesting journey. Thirty years ago, I spent most of my time trying to convince people that cybersecurity is real. My interactions with executives and boards were almost theoretical presentations about how things could go wrong.
Today, it’s a whole different world. We have board members asking good questions because they have seen attacks, they have read materials, they have taken classes.
There is a whole ecosystem where members of the board collaborate with different companies. They can see information, share anecdotes and questions.
I have seen the growth and evolution of how boards and management engage with the topic of cybersecurity. It’s now more tangible and real to them; they better understand the types of incidents that can happen and the financial and operational impact they could have. That’s a positive change.
Also, from the perspective of clients who have gone through security incidents, it’s interesting to see how the mindset shifts within the firm. They understand things in a different way after they have gone through an incident and have been hands-on in the recovery process.
IM: If you could give one piece of advice to fellow CISOs, what would it be?
DR: Go back to the basics: use risk assessments to understand the priorities and gaps in your organization. Even with AI and all the new technologies we have, if you miss the basics then you’re going to have challenges.
Having a detailed analysis and understanding of your ecosystem is key.
You can spend a lot of energy on a shiny new tool, but you may have missed something important that you should have addressed first.