AI models like ChatGPT and Claude overestimate how smart humans really are

New research suggests that the way artificial intelligence thinks about us may be too optimistic. Researchers found that popular AI models, such as OpenAI's ChatGPT and Anthropic's Claude, tend to believe that people are more rational and logical than they actually are, especially in strategic thinking situations.

This gap between how AI expects people to behave and what people actually do could have implications for how these systems predict human decisions in the economy and beyond.

Testing AI against human thinking

The researchers tested artificial intelligence models, including ChatGPT-4o and Claude-Sonnet-4, on a classic game-theory setup known as the Keynesian beauty contest. Understanding this game helps explain why the results matter (via TechXplore).

In a Keynesian beauty contest, contestants win by predicting what others will choose, rather than simply choosing what they personally prefer. Rational play, in theory, means going beyond first impressions and reasoning about the reasoning of others, a depth of strategic thinking that real players often fail to reach in practice.

To see how the AI models stacked up, the researchers had the systems play a version of a game called “Guess the Number,” where each player chooses a number between zero and one hundred. The winner is the one whose choice is closest to half the average choice of all players.
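For intuition, the game rewards iterated reasoning: if everyone chose at random, the average would be about 50 and the winning target would be 25; if everyone anticipates that, the target drops to 12.5, and so on toward zero, the game's Nash equilibrium. The sketch below walks through that level-k logic; the function name and the assumption of a level-0 player who averages 50 are illustrative, not from the study.

```python
# Illustrative level-k reasoning for "guess half the average" (0-100).
# Assumption: a level-0 player picks uniformly at random, averaging 50;
# each higher level best-responds to the level just below it.

def level_k_guess(k: int, target_fraction: float = 0.5,
                  naive_average: float = 50.0) -> float:
    """Return the best-response guess for a level-k player."""
    guess = naive_average
    for _ in range(k):
        guess *= target_fraction  # best response to the level below
    return guess

for k in range(5):
    print(f"level-{k} guess: {level_k_guess(k):.2f}")
# level-0: 50.00, level-1: 25.00, level-2: 12.50, level-3: 6.25, level-4: 3.12
```

Human players in lab versions of this game typically stop after one or two levels of such reasoning, so a model that assumes many more levels will guess too low and miss the winning number.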

The AI models were given descriptions of their human opponents, from first-year students to seasoned game theorists, and asked not just to pick a number, but also to explain their reasoning.

The models adjusted their numbers based on who they thought they were facing, suggesting some strategic thinking. However, they consistently assumed a level of logical thinking in people that most real players do not actually demonstrate, often “playing too smart” and missing the mark as a result.

While the study also found that these systems can tailor choices based on characteristics such as age or experience, they still struggle to identify the dominant strategies people might use in two-player games. The researchers say this highlights the ongoing challenge of adapting AI to real-world human behavior, especially for tasks that require predicting other people's decisions.
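For reference, a strategy is dominant when it does at least as well as every alternative no matter what the opponent plays. The snippet below checks for a strictly dominant row strategy in a small two-player game; the payoff matrix is a hypothetical example, not one used by the researchers.

```python
# Find a strictly dominant row strategy in a two-player payoff matrix.
# The payoffs below (row player's only) are hypothetical.

payoffs = [
    [3, 0],  # row strategy 0 vs. column strategies 0 and 1
    [5, 1],  # row strategy 1 vs. column strategies 0 and 1
]

def dominant_row_strategy(p):
    """Return the index of a strictly dominant row strategy, or None."""
    rows, cols = len(p), len(p[0])
    for r in range(rows):
        others = [o for o in range(rows) if o != r]
        if all(p[r][c] > p[o][c] for o in others for c in range(cols)):
            return r
    return None

print(dominant_row_strategy(payoffs))  # prints 1: strategy 1 wins in every column
```

When no strategy dominates, as in most interesting games, a predictor has to model what the other player is likely to do, which is where the study found the models' assumptions about human reasoning diverging from reality.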

These findings also reflect broader concerns about today's chatbots, including research showing that even the best AI systems are only about 69% accurate, and warnings from experts that AI models can convincingly imitate human personality, raising concerns about manipulation. As AI continues to be used in economic modeling and other complex fields, understanding where its assumptions diverge from human reality will be important.
