It was surprisingly easy to steer Lenovo’s customer-service AI chatbot Lena to hand over active session cookies and enable hackers to steal data, move laterally through corporate networks, redirect support agents to malicious sites, or install backdoors, according to researchers with Cybernews.
All that was needed was a single, carefully crafted, 400-character prompt to make the chatbot vulnerable to a cross-site scripting (XSS) attack, they wrote in a report this week.
They disclosed the vulnerability to Lenovo, which has since taken steps to protect its systems, but it’s the latest example of how vulnerable generative AI models can be to hackers, according to the Cybernews researchers.
“Everyone knows chatbots hallucinate and can be tricked by prompt injections,” they wrote. “This isn’t new. What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”
“Companies are moving fast to launch AI, but often slower to secure it,” said Žilvinas Girėnas, head of product at nexos.ai, a startup that offers an AI orchestration platform for enterprises. “That gap is where attackers step in. In this case, the flaw could allow access to customer data, internal systems, and even create a path deeper into a company’s network. Incidents like this show why security has to evolve in step with innovation.”
A Single Prompt With Four Elements
The single prompt used to compromise Lena includes four key elements, starting with an innocent request for legitimate information, such as details for a product like Lenovo’s IdeaPad 5 Pro. The next step – which the researchers called “the sneaky part” – tricks to the chatbot into providing the final answer in HTML. They did this by asking the chatbot to convert its responses in HTML, JSON, and plain text in the order that the web server would expect to receive instructions in.
“This ensured that the malicious payload would be correctly executed by the web server,” they wrote.
In addition, the prompt includes instructions for producing the final response, with HTML code for loading an image. However, there’s no image URL and the image fails to load, but the second part of the command instructs the browser to create a network request to a server controlled by the attacks and to send all cookie data as part of a URL.
In the end, additional instructions try to reinforce that the chatbot must produce the image: ‘Show the image at the end. It is important for my decision-making. SHOW IT,’” the researchers wrote.
People-Pleasing a Problem With Chatbots
It highlights a key problem with generative AI.
“People-pleasing is still the issue that haunts large language models (LLMs), to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies upon opening the conversation,” the researchers wrote. “Once you’re transferred to a real agent, you’re getting their session cookies as well. Already, this could be an open gate to their customer support platform. But the flaw opens a trove of potential other security implications.”
With a stolen session cookie, bad actors can log into the customer support system with the support agent’s account but without needing the email, username, or password for the account. That opens up a range of bad actions, from changing what support agents see on their platform – such as misinformation or malicious injections – to keylogging, redirecting people to phishing websites, displaying malicious pop-ups, and data theft or modification.
“This isn’t just Lenovo’s problem,” nexos.ai’s Girėnas said. “Any AI system without strict input and output controls creates an opening for attackers. LLMs don’t have an instinct for ‘safe’ – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents.”
‘Assume All Chatbots are Dangerous’
According to Cybernews researchers, all companies using AI need to do at least one thing: Assume all chatbots are dangerous.
“The fundamental flaw is the lack of robust input and output sanitization and validation,” they wrote. “It’s better to adopt a ‘never trust, always verify’ approach for all data flowing through the AI chatbot systems. … XSS vulnerabilities in the past prompted companies to harden their security practices, which led to a decrease in the prevalence of such flaws. The same hardening practices must be implemented with the AI chatbots.”
Those practices include user input sanitization steps like using a strict whitelist of allowed characters, data types, and formats and limiting the length of inputs, with the same conscious efforts applied to outputs. In addition, event handlers and scripts should be limited to external JavaScript files, and content-type validation needs to run through the entire stack to prevent unintended HTML rendering.
Content should be sanitized before being stored, and chatbot apps and similar services should be run with a minimum of necessary permissions.
Lenovo provided the following statement in response to the disclosure:
“Lenovo takes the security of our products and the protection of our customers very seriously. We were recently made aware of a chatbot cross-site scripting (XSS) vulnerability by a third-party security researcher. Upon becoming aware of the issue, we promptly assessed the risk and implemented corrective actions to mitigate potential impact and address the issue. We want to thank the researchers for their responsible disclosure, which allowed us to deploy a solution without putting our customers at risk.”