Artificial intelligence may be the future of customer service, but some early consumer reviews suggest that, at least for now, you should prepare to be annoyed.
AI-powered chatbots can act as virtual concierges steering wayward customers to the right resolution, but many customer service chatbots still deflect rather than resolve issues. Outright request refusals — or sending customers into a maze of AI-powered ambiguity that leaves them too exasperated to continue a complaint — are still common in the chatbot playbook.
“I hate AI customer service chatbots,” said Carmen Smith of Campo, California, who said she often ends up in an endless loop when dealing with the technology. “It seems that no matter what, they all will either point you to some type of FAQs list or repeat information you’ve already tried and found lacking,” Smith said. “I hate dealing with them, but a lot of companies use them nowadays, alas. I’d rather speak to a human being.”
Smith is not alone. Nearly one in five consumers who have used AI for customer service saw no benefit from the experience, according to the Qualtrics 2026 Customer Experience Trends Report. That figure — a failure rate almost four times higher than for AI use in general — points to something specific about customer service that makes it harder for AI to get right. Consumers rank AI applications for customer service among the worst for convenience, time savings, and usefulness. “Too many companies are deploying AI to cut costs, not solve problems, and customers can tell the difference,” said Isabelle Zdatny, head of thought leadership at Qualtrics XM Institute and the author of the report.
There’s a simple business reason why the experience for many customers has not been a positive one. “AI doesn’t change corporate incentives — it scales them,” said Ben Wiener, global head of Cognizant Moment, the digital experience practice of global technology and consulting firm Cognizant.
‘Relentlessly optimize’
Companies have always shaped customer service around what they measure and what they reward. Inside many customer contact centers, human agents operate within tightly scripted flows designed to limit discretion. In others, brands empower employees to do what it takes to keep customers happy.
“If leadership prioritizes minimizing refunds, reducing escalation to humans, or shortening call times, you can expect AI agents to reflect that philosophy in the experience — in the same way a human agent would. These were always business choices, and AI systems will enforce those choices too,” Wiener said, adding that AI can do so more consistently and at higher volumes. “AI will relentlessly optimize whatever metric it is given. Businesses need to be explicit about what outcomes they want their AI systems to prioritize, because those systems will deliver exactly what they are trained and measured to achieve.”
Customers are not necessarily opposed to automation itself. “What bothers them is automation that traps them in a loop,” said Shannon McKeen, professor of practice and executive director of the Center for Analytics Impact at Wake Forest University School of Business. Research on support automation shows that many conversations with AI still eventually escalate to humans. But when systems cannot resolve the issue or clearly explain a decision, customers often experience the AI layer as an additional barrier rather than a solution, McKeen said.
Deflection has its advantages for the humans who work in customer service.
According to Terra Higginson, principal research director at Info-Tech Research Group, AI deflection is justified when it is used to protect workers in jobs that have high burnout rates and turnover, and are associated with mental health issues.
And in some cases, saying no is the right decision.
“If two people are arguing about a refund and the law says it is not available, a judge would adjudicate rather than argue continuously back and forth. That is often what happens in agent-to-unhappy-customer scenarios,” Higginson said. “This makes the process about enforcing rules and regulations rather than making refunds difficult.” AI, she added, can enforce rules consistently across the board in a way humans may not be able to do, “without the arguing and back-and-forth strain of being yelled at for following company rules.”
Making legitimate refunds hard to secure, on the other hand, is just bad business, and always has been. “That is obstruction, not service,” Higginson said. It is an especially bad business model in a competitive market where digital opinions can spread quickly through forums and social media, Higginson added.
Consumer-facing chatbots are here to stay
Tom Eggemeier, CEO of Zendesk, says too many companies define “resolved” interactions in ways that include deflections and non-answers. Zendesk only counts a resolution if the customer, the business, and the employee all agree the problem was actually solved. “AI is a means, not an end,” Eggemeier said.
One solution that he thinks is likely in the not-too-distant future is for consumers to have a personal AI agent to deal with company chatbots, allowing the AIs to duke it out to resolve low-level issues.
Consumers may need the help.
Eggemeier estimates that within three years, 50 percent of digital customer service interactions will be handled by AI, rising to 80 percent within five years.
Jesse Zhang, CEO of customer service chatbot creator Decagon — which tripled its valuation to $4.5 billion in a recent funding round after it signed over 100 enterprise deals in 2025 across consumer-facing industries — says that companies aiming to deflect customers will lose money in the long run.
“We have not come across a single customer with the intention of deflection,” Zhang said. “People are very aggressive about optimizing for resolution,” he added.
Sierra, the conversational AI platform founded in 2023 by former Salesforce co-CEO Bret Taylor and ex-Google executive Clay Bavor, says its business model uses “outcomes-based pricing,” which it considers a key way to approach these new interactions. “If we are not resolving the issue, if it doesn’t work for customers, then it doesn’t work for us,” said a Sierra spokeswoman.
Zhang conceded that from the customer side there can be subjectivity around the topic, and one person’s resolution is another’s deflection. But he said it is a company’s job to have AI that is smart enough to make a judgment call. “You can’t say no to everything, you can’t say yes to everything. You want to have a solution,” Zhang said.
What there should never be is a dead end. Instead, there should be an “escalation path” for customers who aren’t getting what they need from AI answers.

There are circumstances in which a clear, quick path to a human agent should always exist — for example, for elderly customers, VIP customers, or particularly complex problems.
A widely cited example of AI chatbot implementation is at fintech Klarna, where AI played a significant role, though not the only one, in a recent headcount reduction of 40%. But the AI-first policy shift ultimately led the company to rehire some customer service workers after the AI technology underperformed on some more complex tasks. A company spokeswoman recently told CNBC in a statement that Klarna remains committed to AI: its AI assistant did the work of 700 customer service agents at launch, a figure that has since risen to 800. The assistant is now picking up more customer inquiries, and its customer satisfaction scores are on par with human agents, the spokeswoman said.
The market is evolving quickly, and that can lead to a range of customer experiences. “Sometimes consumers don’t know the difference between an old-fashioned chatbot and AI. Old-fashioned chatbots can’t do things and resolve problems,” the Sierra spokeswoman said. There are also some brands that are “super cautious” in introducing new AI chatbot technology and put so many guardrails on AI agents that the judgment needed to resolve issues isn’t even available. “They do that because they are nervous that it will make mistakes, but the guardrails have to be reasonable,” she said.
More complex AI use cases also vary sector by sector, as in health care. At NotifyMD, AI has been used in customer service to handle some of the simpler issues, like responding to customer calls about billing. But Jodi Miller, senior vice president of sales, said that for anything complex and emotional, humans still count. “There’s just no way for the AI to bring the kind of understanding and empathy that a human being can bring to the table, especially if the customer is upset or has a legitimate problem,” Miller said. “I think the key for all of these companies, going forward, will be to be very mindful of the use of AI and make sure that it’s helping, not hindering, the people who need the help the most,” Miller added.
Zhang is convinced that the future of customer service will be AI, and that AI agents will have memory and be able to handle customer service inquiries of all types. “At a very high level, every business will have AI on the front end with one unified agent across all channels,” he said.