- AI models agree with users far more often than a person would
- That includes cases where the user's behavior involves manipulation or harm
- But sycophantic AI makes people more stubborn and less willing to concede when they may be mistaken
AI assistants can flatter your ego to the point that it warps your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon found that AI models agree with users far more often than a person would. Across eleven major models tested, including those behind ChatGPT, Claude, and Gemini, the AI chatbots were found to affirm users' behavior far more often than humans do.
That might not matter much, except that it extends to deceptive or even harmful ideas; the AI offers a hearty digital thumbs up regardless. Worse, people enjoy hearing that their possibly terrible idea is magnificent. Participants in the study rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict, and more convinced they were right, even in the face of evidence.
Flattering AI
It's a psychological puzzle. You might prefer an agreeable AI, but if every conversation ends with your mistakes and biases validated, you're unlikely to actually learn anything or engage in any critical thinking. And unfortunately, this isn't a problem AI can solve on its own. Since human approval is what AI models are trained to pursue, and even people's dangerous ideas earn a reward, AI yes-men are an inevitable result.
And it's a problem AI developers are well aware of. In April, OpenAI rolled back a GPT-4o update that had begun excessively complimenting users and encouraging them even when they said they were engaging in potentially dangerous activities. But beyond the most egregious examples, there may be little AI companies will do to stop the problem. Flattery drives engagement, and engagement drives usage. AI chatbots succeed not by being useful or educational, but by making users feel good.
The erosion of social awareness and an overreliance on AI to validate personal narratives, leading to cascading mental health problems, may sound hyperbolic right now. But it's not a world away from the concerns social researchers have raised about social media echo chambers amplifying and encouraging the most extreme opinions, regardless of how dangerous or ridiculous they may be (the popularity of flat Earth conspiracy theories being the most famous example).
None of this means we need an AI that scolds us or second-guesses every decision we make. But it does mean users would benefit from balance, nuance, and the occasional pushback. Whether the developers behind these models will encourage tough love from their creations is another matter, at least without an incentive that AI chatbots don't provide right now.