A survey of 3,001 cybersecurity and information security managers in the U.S. and the United Kingdom (UK) published today finds more than a quarter (26%) work for organizations that have been victimized by data poisoning of artificial intelligence (AI) models.

Conducted by Censuswide on behalf of ISMS.online, a provider of a platform for ensuring data privacy and compliance, the survey also finds that 20% of organizations reported experiencing deepfake or cloning incidents in the last 12 months.

More than half of respondents (54%) concede that their organization deployed AI technologies too quickly and is now struggling to scale them back or implement them more responsibly. A total of 39% identified securing AI and machine learning technologies as a top challenge, while more than half (52%) said AI and machine learning are hindering their overall security efforts.

Survey Surfaces Rising Tide of Cyberattacks Involving AI

Survey respondents also identified AI-generated misinformation and disinformation as an emerging threat their organizations will encounter over the next 12 months (42%), followed closely by generative AI-driven phishing (38%), shadow AI misuse (37%) and deepfake impersonations (28%).

On the plus side, 79% of respondents are using AI, machine learning, or blockchain technologies to improve security, with nearly all having plans to invest in GenAI-powered threat detection and defense (96%), AI governance and policy enforcement (95%) and deepfake detection and validation tools (94%).

ISMS.online CEO Chris Newton-Smith said the survey makes it apparent that cyberattacks involving AI technologies are becoming commonplace as adversaries move beyond experimenting with new tactics and techniques. In response, organizations are investing more in technologies to counter these threats, simply because at this juncture there is no putting the AI genie back in the bottle, he added.

The challenge is that 42% of respondents said their organization continues to struggle with a cybersecurity skills gap. Additionally, 29% said their organization was impacted by a malware infestation, while 27% reported some type of cloud breach. Roughly a third of organizations have had employee (35%), customer (34%), financial (32%), research (32%), product (29%) and intellectual property (25%) data compromised over the past year. Only a fifth (20%) say no data loss occurred in that period, and only 29% say they did not receive a fine for a data breach or a violation of data protection rules in the past 12 months.

Chances are, of course, high that additional fines will be levied as organizations look to operationalize AI. Like it or not, adoption of both sanctioned and unsanctioned AI tools is occurring faster than cybersecurity teams can keep pace with, noted Newton-Smith. Cybercriminals, meanwhile, now view AI tools and platforms as a rich set of potential targets that, at this point, are still lightly defended, he added.

Each organization will need to determine what level of spending is required to secure its AI investments, but as the overall AI attack surface continues to expand, so will the costs. The only thing left to determine is how much time remains to prevent what, given the scale AI enables, might easily become a cataclysmic event.