The risk of insider threats is on the rise, and businesses are concerned about the cybersecurity implications of intentionally malicious or negligent employees, research by Mimecast warns.
According to the company’s State of Human Risk Report 2026, internal cybersecurity risk has grown across the board, to the extent that it should be treated as a “critical business threat.”
In many cases, this additional insider risk stems from employees mishandling or actively abusing AI tools.
According to the report, cybersecurity leaders are concerned about the rise of AI in the workplace and the potential for large language models (LLMs) and other AI productivity tools to expand the attack surface that can be exploited by both external and internal threats.
Over the past year, 42% of organizations have reported an increase in threats from malicious insiders, employees who want to actively cause harm to their employer by stealing, manipulating or destroying data.
The same percentage (42%) reported a rise in cybersecurity incidents because of employee negligence.
These are incidents which occur because of careless actions by the employee which could have easily been avoided, such as transferring data insecurely using personal cloud accounts, using weak passwords or opening malicious links in phishing emails.
The report warns that cyber attackers look to exploit this negligence – or indeed, actively malicious intent – to help gain access to accounts, files and systems, and that the problem is growing.
According to the paper, concerns about malicious insiders among information security leaders have grown by 10% in the last year, and IT and cybersecurity leaders expect to face an average of six insider-driven threats a month.
“Insider risk has become one of the most consequential and underestimated threats facing organizations today, not just because of the data loss it causes, but because attackers are increasingly exploiting insiders as a deliberate entry point to bypass perimeter defenses entirely,” said Mimecast CISO Leslie Nielsen.
Attackers also deploy AI tools themselves, using them to help create more realistic, more effective phishing emails. Meanwhile, it’s possible for malicious insiders to deploy AI tools to help them achieve their goals, for example, by searching for and exfiltrating files and data.
“As AI makes it easier for insiders to exfiltrate data at scale, security must meet users at the point of risk,” said Nielsen.
The paper is based on research by Mimecast and Vanson Bourne, which surveyed 2,500 IT and security decision makers across the world, including North America, Europe, Southeast Asia and Australia. Organization sizes ranged from 250 to over 10,000 employees.