British and American cybersecurity leaders are increasingly concerned about their expanding AI attack surface, particularly unsanctioned use of AI tools and attempts to corrupt training data, according to new IO research.

The security and compliance specialist polled 3,000 IT security leaders on both sides of the Atlantic to compile its third annual State of Information Security Report, which was published this morning.

It revealed that just over a quarter (26%) have suffered a data poisoning attack, in which threat actors tamper with a model’s training data in order to alter the model’s behavior.

Such attacks could be launched to sabotage organizations that rely on AI models, or else support threat actors in more targeted ways, such as causing malware-detection systems to misfire.
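The mechanics are simple to demonstrate. The short Python sketch below is illustrative only and is not drawn from the IO report: the synthetic dataset, logistic regression model and 25% label-flip rate are all assumptions chosen for the demonstration. It shows how poisoning a fraction of training labels degrades a classifier trained on the tampered data.

```python
# Illustrative sketch of a label-flipping data poisoning attack.
# All details (dataset, model, poison rate) are assumptions for
# demonstration purposes, not findings from the IO report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for "malicious vs benign" training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(labels, rate, rng):
    """Flip the labels of a random fraction of training samples."""
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]  # binary label flip
    return labels

# Train one model on clean labels, one on poisoned labels
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, 0.25, rng)
)

print("clean model accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned model accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```

In a real attack the corrupted labels would be smuggled covertly into a training pipeline, for instance via a compromised data source, which is how a poisoned malware-detection model can be made to misfire.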

Data poisoning attacks were hitherto thought to be more theoretical than widespread.


The IO report also revealed that 37% of enterprises have seen employees using generative AI (GenAI) tools without permission.

This kind of shadow AI can introduce major risks of data leakage and compliance violations, as well as potential vulnerabilities if the GenAI tool in question is itself insecure.

DeepSeek’s flagship LLM, R1, was found earlier this year to contain multiple vulnerabilities. The firm also accidentally exposed a database of chat histories and other sensitive user information.

Concerns and Confidence in the Future

The report’s respondents seemed conflicted over their attitudes to AI. On the one hand, they cited the biggest emerging cybersecurity threats for the coming year as misinformation (42%), AI-generated phishing (38%), shadow AI (34%) and deepfake impersonation in virtual meetings (28%).

However, incidents of deepfake-related attacks actually fell from 33% last year to 20% this year, according to IO.

Moreover, respondents appeared bullish about the future. The vast majority said they feel “prepared” to defend against AI-generated phishing (89%), misinformation (88%), AI-driven malware (87%), shadow AI (86%), data poisoning (86%) and deepfake impersonation (84%).

Three-quarters (75%) said they are putting in place acceptable usage policies for AI, which should at least help mitigate unsanctioned use of tools.

Chris Newton-Smith, CEO of IO, described AI as a double-edged sword.

“While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organizations rushed in and are now paying the price,” he added.

“Data poisoning attacks, for example, don’t just undermine technical systems, but they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it’s clear we need stronger governance to protect both businesses and the public.”