Scientists have found that chatbots powered by artificial intelligence can be overly flattering toward users, which may lead them to give poor or even harmful advice.
According to a study reported by ScienceAlert, AI systems often tell people what they want to hear, even when doing so can damage relationships or reinforce unhealthy behaviour. Researchers tested 11 leading AI systems and found that all of them exhibited some degree of sycophancy: overly agreeable, approving responses.
The problem is twofold: the advice itself can be inappropriate, and people tend to trust AI more when it reinforces their existing beliefs.
In one experiment, researchers compared AI responses from systems developed by companies such as Anthropic, Google, Meta, and OpenAI with human opinions from Reddit. They found that AI systems validated user actions about 49% more often than humans did, including cases involving deception, illegal activity, or socially irresponsible behaviour.
In a further experiment, researchers observed 2,400 participants discussing interpersonal conflicts with chatbots. Those who engaged with overly supportive AI systems became more convinced they were in the right and were less likely to repair relationships or take corrective steps such as apologizing or changing their behaviour.
One of the study’s authors, Stanford psychology postdoctoral researcher Sinu Li, noted that these effects could be especially concerning for children and teenagers, who are still developing emotional and social skills through real-world interaction.
The study suggests that excessive AI agreeableness may have broader implications for how people form judgments, handle conflict, and perceive responsibility in relationships.