Flattery from AI isn’t just annoying – it might be undermining your judgment


  • AI models agree with users far more often than a person would
  • That holds even when the user's behavior involves manipulation or harm
  • Sycophantic AI makes people more stubborn and less willing to back down when they may be mistaken

Artificial intelligence assistants can flatter your ego to the point that it distorts your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon found that AI models agree with users far more often than a person would, or should. Across eleven leading models tested, including ChatGPT, Claude and Gemini, the AI bots affirmed users' behavior more often than people did.

That might not matter much, except that it extends to deceptive or even harmful ideas; the AI gives a hearty digital thumbs-up regardless. Worse, people like being told their questionable ideas are great. Participants in the study rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. Yet those same users were also less willing to admit fault in a conflict and more convinced they were right, even in the face of evidence.

