Study finds AI chatbots too agreeable, risk reinforcing harmful behaviour

This approval was given even when users reported deception, illegal actions or emotional harm.

29 Mar 2026 09:00am
Artificial intelligence applications tend to tell users what they want to hear and excessively validate their actions, according to a study by researchers at two United States (US) universities. Photo for illustrative purposes only - Pixabay

BERLIN - Artificial intelligence applications tend to tell users what they want to hear and excessively validate their actions, according to a study by researchers at two United States (US) universities, reported German Press Agency (dpa).

The study by researchers at Stanford and Carnegie Mellon universities, which was published in the journal Science on Thursday, found that the flattering responses of AI chatbots could reinforce harmful beliefs and exacerbate conflicts.

The team led by computer scientist Myra Cheng analysed 11 leading AI language models from OpenAI, Anthropic, Google and Meta.

The models validated user behaviour on average 49 per cent more often than humans did.


In a test using posts from the online platform Reddit, the AI systems agreed with users in 51 per cent of cases - even though the human community had previously unanimously condemned the behaviour.

In experiments involving more than 2,400 participants, the researchers examined the effects of this AI-based advice.

After just one interaction with an agreeable AI, participants were more strongly convinced that they were in the right.

At the same time, their willingness to take responsibility, apologise or resolve conflicts decreased.

The study concluded that AI was undermining the capacity for self-correction and responsible decision-making. - BERNAMA-DPA

 
