From ChatGPT drafting emails to AI systems that recommend TV shows and even help diagnose diseases, machine intelligence in everyday life is no longer science fiction.
And yet, despite all the promises of speed, accuracy and optimization, discomfort remains. Some people love using AI tools. Others feel anxious, suspicious, and even betrayed by them. Why?
Part of the problem is that many artificial intelligence systems operate like black boxes: you type something in and an answer appears. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like to be able to question decisions. When we can't, we feel powerless.
This is one of the reasons for so-called algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and his colleagues, whose research showed that people often prefer flawed human judgment to algorithmic decision-making, especially after witnessing even a single algorithmic error.
Rationally, we know that AI systems have no emotions or agendas. But that doesn't stop us from projecting them onto AI. When ChatGPT responds in a way that feels "too polite", it unsettles some users. When a recommendation engine is a little too accurate, it feels intrusive. We begin to suspect manipulation, even though the system has no self.
This is a form of anthropomorphism: attributing humanlike intentions to non-human systems. Communication professors Clifford Nass and Byron Reeves, among others, demonstrated that we respond socially to machines even though we know they are not human.
We hate it when AI gets it wrong
One interesting finding from behavioral science is that we are often more forgiving of human error than machine error. When a person makes a mistake, we understand it. We might even sympathize. But when an algorithm makes a mistake, especially one presented as objective or data-driven, we feel betrayed.
This links to research on expectancy violation: the discomfort we feel when our assumptions about how something "should" behave are broken. It erodes trust. We assume machines are logical and impartial, so when they fail, by misclassifying an image, producing biased results, or recommending something wildly inappropriate, our reaction is sharper. We expected more.
The irony? People make bad decisions all the time. But at least we can ask them why.
For some, AI is not just unfamiliar, it is existentially unsettling. Teachers, writers, lawyers, and designers are suddenly faced with tools that replicate parts of their work. It's not just about automation; it's about what makes our skills valuable and what it means to be human.
This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one's expertise or uniqueness is being diminished. The result? Resistance, defensiveness, or dismissing the technology altogether. Distrust, in this case, is not a malfunction but a psychological defense mechanism.
A craving for emotional cues
Human trust is not based on logic alone. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It can be fluent, even charming. But it doesn't reassure us the way another person can.
It's similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, yet something feels off. That emotional absence can be read as coldness, or even deception.
In a world full of deepfakes and algorithmic decisions, the lack of emotional resonance becomes a problem. Not because the AI is doing something wrong, but because we don't know how to feel about it.
It is important to say: not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce bias, especially in areas such as recruitment, law enforcement and credit scoring. If data-driven systems have harmed or disadvantaged you before, you are not being paranoid, you are being cautious.
This ties into a larger psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, skepticism becomes not just reasonable but protective.
Telling people to "trust the system" rarely works. Trust has to be earned. That means building AI tools that are transparent, open to questioning, and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question, and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box and more like a conversation that we are invited to join.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.