When people think about cybersecurity, they often picture a hooded figure in a dark room, hammering away at a keyboard, trying to break through a digital perimeter. It’s a familiar image, and an increasingly misleading one. Today’s reality isn’t man vs. machine; it’s human vs. human.
What's different today is that you no longer need to be deeply technical or exceptionally skilled to play the attacker. It's here that AI is lowering the barrier to entry and changing the rules of engagement.
Yes, generative AI has given security professionals new ways to accelerate detection and response. But it's also given bad actors a powerful upgrade in their ability to deceive. Well-written phishing emails, deepfake voice calls, and highly targeted social engineering attacks are now faster to produce and harder to detect. The tools of the trade are persuasion, misdirection, and manipulation, which means your people, not your perimeter, remain your most exposed attack surface.
The Same Attacks, Better Disguises
AI allows bad actors to industrialise deception. Phishing emails are no longer poorly written or obviously generic. They're fluent, well-timed, and context-aware. Voice cloning and AI-driven call scripts can mimic tone, authority, and urgency with alarming accuracy. Social engineering attacks that once took weeks to plan and prepare can now be executed in minutes.
From a defender’s perspective, that acceleration matters. It shortens reaction times, increases volume, and raises the cognitive load on people making decisions at the frontline. But it doesn’t fundamentally alter the nature of the threat. The door is still opened or closed by a human being.
Why People Remain the Weakest Link and the Strongest Defence
Security professionals often describe people as the weakest link. There’s truth in that, but it’s only half the story. People are difficult to control, but they’re also the only part of the system capable of judgement. Firewalls don’t get suspicious. Policies don’t feel uneasy. Automated controls can enforce rules, but they can’t interpret intent.
That matters because modern attacks are designed to bypass technical controls by targeting psychology instead. Fear, urgency, authority, and confusion are the weapons of choice. The goal isn't to break a system; it's to rush someone into making a bad decision before they have time to think.
AI amplifies this by making those attacks feel more legitimate. This is why cybersecurity today is less about building higher walls and more about shaping better judgement.
The Danger of Over-Indexing on AI
There’s a risk, as AI adoption accelerates, that organisations focus too heavily on AI-driven defences while neglecting the fundamentals. New tools promise faster detection, better analytics, and automated response, and many of them deliver real value.
However, some of the most effective attacks we’ve seen recently have started with people. They’ve relied on classic social engineering techniques: password reset requests, authority impersonation, and carefully staged misinformation designed to create distraction.
In some cases, attackers don't even need to breach systems at all. By engineering the perception of a breach, they can trigger confusion and panic. In one recent example involving a major social network, attackers convinced users they had lost control of their accounts, prompting frantic recovery attempts that created exactly the disruption the attackers needed. The breach wasn't technical; it was psychological.
Breaches seen last year offer another clear example. Attackers were able to embed themselves on calls with security teams at one organisation, asking the right questions and learning where the pressure points were. They then moved on to other retailers, exploiting the same weaknesses with remarkable speed. The lesson is simple: attackers don't exploit systems; they exploit people.
Education Without Fear
If people are central to security, the question becomes how to support them without paralysing them. The answer isn’t more rules, louder warnings, or harsher consequences. Fear doesn’t create good security outcomes. It creates knee-jerk reactions that attackers rely on.
At bet365, we focus on building confidence, as well as compliance. We want people to stop and think before they act. We want them to trust their instincts, and, crucially, we want them to feel safe escalating concerns, even if they turn out to be false alarms.
If someone feels something isn't right, the correct response is never to push through under pressure. Take a call from someone claiming to be from inside the organisation: if they're requesting an urgent password reset but can't provide the correct credentials, the call should be escalated, even if they insist they're the CEO.
Authority, urgency, and familiarity are classic pressure tactics. Pausing, checking, and handing it off isn’t a failure; it’s good security practice. Good security cultures reward that behaviour rather than punishing it. This approach recognises a simple truth: attackers exploit moments of isolation. Collaboration breaks that spell.
Guardrails, Not Handcuffs
AI also forces organisations to confront a long-standing tension in security: how to protect the business without stifling innovation. If you don’t trust your people, you can’t innovate. But if you remove all guardrails, you invite risk. The solution isn’t blanket restriction. It’s contextual control.
Different roles require different levels of access, flexibility, and autonomy. Policies need to be clear and consistent, but their application must be pragmatic. Exemptions aren't failures; they're deliberate, managed decisions that reflect how the business operates. AI doesn't change that balance. It makes getting it right more important.
Security as Judgement at Scale
AI raises the stakes because it compresses time. Decisions that once allowed for reflection now demand near-instant judgement. That places enormous pressure on people, particularly those in frontline operational roles.
The organisations that will succeed aren’t those with the most sophisticated tools, but those that invest in judgement at scale. That means:
- Teaching people how attackers think, not just what rules to follow
- Encouraging pause over panic
- Treating security as a shared responsibility, not a specialist task
AI may be machine-driven, but cybersecurity outcomes are still shaped by human behaviour.
The Human Constant
AI will continue to evolve, tools will improve, attacks will become more convincing, and automation will play a bigger role on both sides of the equation. But one thing won’t change. Security isn’t about beating the machine. It’s about helping people recognise when they’re being played and giving them the confidence to act accordingly.
That’s where the real battle is fought, and it always has been.