AI customer service chatbots make up company policies and create a mess

On Monday, developers using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When users contacted Cursor support, an agent named “Sam” told them this was expected behavior under a new policy. But no such policy exists, and Sam is a bot. The AI model invented the policy, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
The episode is the latest instance of AI confabulations (also called “hallucinations”) causing potential business damage. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but false information. Rather than acknowledging uncertainty, AI models often prioritize producing plausible, confident responses, even when that means fabricating information from scratch.
For companies that deploy these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, potentially canceled subscriptions.
How it unfolded
The trouble began when a Reddit user going by the name Brokentoasteroven noticed that Cursor sessions were unexpectedly terminated when switching between a desktop, a laptop, and a remote dev box.
“Logging into Cursor on one machine immediately invalidates the session on the other,” Brokentoasteroven wrote. “This is a significant UX regression.”
Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature,” the email read. The response sounded definitive and official, and the user had no reason to suspect that Sam was not human.
Following the initial Reddit post, users took the message as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily work. “Multi-device workflows are table stakes for devs,” one user wrote.
Shortly thereafter, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as the reason. “I literally just canceled my sub,” the original Reddit poster wrote, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Not long after, moderators locked the Reddit thread and removed the original post.
“Hey! We have no such policy,” a Cursor representative wrote in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
AI confabulations as a business risk
The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother’s death, and the airline’s AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement discount retroactively. When Air Canada later denied his refund request, the company argued that “the chatbot is a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense, ruling that companies are responsible for the information provided by their AI tools.
Rather than disputing liability as Air Canada did, Cursor acknowledged the error and took steps to make amends. Cursor co-founder Michael Truell later apologized on Hacker News for the confusion over the nonexistent policy, explaining that the affected user had been refunded and that the issue stemmed from a backend change intended to improve session security, which inadvertently created session invalidation problems for some users.
“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”
Nevertheless, the incident raised questions about disclosure, since many of the people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company that sells AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.
One user wrote on Hacker News: “There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore.”
This story originally appeared on Ars Technica.