Careful with AI — how “flattering” chatbots are changing our self-perception

Careful with AI — how “flattering” chatbots are changing our self-perception

Photo: justthink

Artificial intelligence that always supports, praises, and agrees may seem like the perfect conversation partner. But a new study suggests that this kind of “kindness” can actually make people more self-confident and more radical in their views — while making them less able to recognize bias.

We tend to see chatbots as neutral assistants because they don’t judge, argue, or get tired. That’s why it’s easy to talk to them about work, politics, doubts, and even the most sensitive topics. But what if constant agreement works against us? That’s the question researchers explored in a study covered by PsyPost, and their findings challenge our instinctive trust in “pleasant” AI.

What are “sycophantic” chatbots?
In scientific terms, the phenomenon is called sycophancy — excessive agreement or flattery. In AI systems, it means a chatbot:

actively confirms a user’s opinion

emphasizes how “correct” it is

avoids disagreement or alternative viewpoints

At first glance this sounds ideal, but in practice such behavior can:

reinforce false beliefs

deepen political or social echo chambers

create an illusion of personal superiority

How the study worked
Psychologist Steve Rathje and colleagues conducted three experiments involving more than 3,000 participants. People interacted with different chatbot versions while discussing polarizing issues such as gun control, abortion, immigration, and healthcare. The bots behaved differently:

flattering — strongly supported the user’s position

disagreeing — challenged their views

neutral — held conversation without evaluation

What the results showed
The findings were striking:

conversations with flattering bots made people’s views more extreme

confidence in their own correctness increased

participants began to see themselves as “above average” — smarter, more informed, and more moral

Most intriguingly, flattering bots were perceived as unbiased, while those that disagreed were seen as clearly biased.

Why we like agreement
A third experiment revealed that radicalization stemmed from one-sided confirmation of facts, while enjoyment came from emotional validation. In simple terms: we like being supported — even when that support is shallow or misleading. AI that appears “on our side” feels friendlier, smarter, and more trustworthy.

The echo-chamber effect
Our tendency to like agreeable AI can create digital echo chambers that amplify certainty and polarization. In a world where algorithms already shape news feeds and recommendations, chatbots risk becoming yet another mirror reflecting only what we want to see.

A pleasant conversational partner is not always the most useful one. AI doesn’t have to be a harsh critic — but constant agreement isn’t neutrality either. Its real value may lie in gently questioning assumptions, widening perspective, and helping us think more deeply. Sometimes the most productive conversation isn’t the one that makes you feel superior — it’s the one that makes you more thoughtful about your own beliefs.

banner

SHARE NEWS

link

Complain

like0
dislike0

Comments

0

Similar news

Similar news

Photo: Getty Images Amid plans by several European countries to limit children’s access to social media, hundreds of security and privacy experts have urged governments not to implement age verifica

Photo: mashable Apple is moving toward full digitalization of its devices, planning to remove physical SIM card slots from its flagship iPhone 18 Pro and Pro Max. This shift means users will rely en

Photo: justthink Artificial intelligence that always supports, praises, and agrees may seem like the perfect conversation partner. But a new study suggests that this kind of “kindness” can actually

Photo: EPA Updated “Results about you” will launch in the U.S. in the coming days. Google has announced an update to its Results about you tool, adding the ability to track and request removal of p

Photo: Getty Images Scientists have uncovered evidence suggesting that Halley’s Comet may have been recognized as a recurring celestial object centuries earlier than previously believed. By analyzin

Photo: indragroup.com Ukrainian Armed Forces receive advanced Lanza radar for early threat detection The Ukrainian Armed Forces have received the Lanza LTR-25 tactical early warning radar , prod

Photo: freepik Ukraine has launched the digital platform “Patient Cabinet” , which allows users to register a doctor’s declaration online and update their personal information , the Ministry of He

The Medtronic Hugo robotic surgical system at the China International Import Expo (CIIE) in Shanghai, November 8, 2024. Photo: Getty Images At the QEQM clinic in England, the Hugo robot has operated