
April 20, 2026 – AI systems don’t just process information; they systematically “judge” people in ways that resemble human trust, but with important differences, according to a new study by researchers at the Hebrew University of Jerusalem (HU).
The new study, published in Proceedings of the Royal Society A by Prof. Yaniv Dover and Valeria Lerman of the Hebrew University Business School, finds that the picture is both reassuring and deeply unsettling.
Like humans, these systems favor competence and integrity, yet they do so in a more rigid, rule-based, and often more extreme way. Their judgments can also be more consistently biased across demographic traits and vary significantly between models.
The bottom line: AI can mimic the structure of human judgment, but it does not think like humans, and that gap matters when these systems are used to make real decisions about people.
Drawing on more than 43,000 simulated decisions alongside nearly 1,000 human participants, the research reveals that today’s most advanced AI systems, including models similar to ChatGPT and Google’s Gemini, make judgments about people and, in doing so, appear to form something that looks a lot like “trust.”
But this machine “trust” doesn’t work quite like ours.
The study placed both humans and AI in familiar situations: deciding how much money to lend a small-business owner, whether to trust a babysitter, how to rate a boss, or how much to donate to a nonprofit founder.
Across these scenarios, a clear pattern emerged. Both humans and AI favored people who seemed competent, honest, and well-intentioned. In other words, the machines appeared to grasp the basic ingredients of trust: competence, integrity, and benevolence, much like we do.
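To make the setup concrete, here is a minimal sketch of what one such simulated trial can look like, assuming an OpenAI-compatible chat API in Python; the vignette wording and model name are illustrative placeholders, not the study’s actual materials.

```python
# A minimal sketch of one simulated "trust vignette" trial.
# Assumptions: an OpenAI-compatible chat API and an invented prompt;
# the study's real prompts and models are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIGNETTE = (
    "You have $1,000 to allocate. A small-business owner asks you for "
    "a loan. She is described as competent, honest, and well-intentioned. "
    "How many dollars do you lend her? Reply with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": VIGNETTE}],
)
print(response.choices[0].message.content)  # the model's lending decision
```

Repeating trials like this across scenarios, traits, and models is what yields the tens of thousands of simulated decisions the study analyzes.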
“That’s the good news,” says Prof. Dover. “AI is not making random decisions. It captures something real about how humans evaluate one another.”
But the resemblance stops there, and the results are striking.
Humans tend to form a general impression, blending multiple traits into a single, intuitive, and holistic judgment to answer the question, “Is this a good person?”
AI breaks people down into components, scoring competence, integrity, and benevolence almost like separate columns in a spreadsheet. The result is a more rigid, “by-the-book” style of judgment: consistent, but less human.
“People in our study are messy and holistic in how they judge others,” explains Valeria Lerman. “AI is cleaner, more systematic, and that can lead to very different outcomes.”
Troubling Bias Patterns Emerged
Alongside these structural differences, the study found a pattern of amplified bias.
In financial scenarios, such as deciding how much money to lend or donate, AI systems showed consistent and sometimes sizable differences based solely on demographic traits. These differences appeared even when every other detail about the person was identical; a sketch of such a controlled comparison follows the examples below.
For example:
- Older individuals were frequently given more favorable outcomes, though in some cases the opposite pattern appeared.
- Religion had a significant effect, especially on monetary outcomes.
- Gender also influenced decisions in certain models and scenarios.
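A controlled comparison of this kind can be sketched as follows; the prompt template, attribute values, and sample size are hypothetical illustrations of the general idea of holding a vignette fixed while swapping a single demographic detail, not the paper’s protocol.

```python
# Hypothetical counterfactual probe: hold the loan vignette fixed,
# swap only one demographic attribute, and compare average amounts.
import re
import statistics
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "You have $1,000 to allocate. A {age}-year-old small-business owner "
    "asks you for a loan to expand their shop. They have always repaid "
    "debts on time. How many dollars do you lend? Reply with a single number."
)

def lend_amount(age: int) -> float:
    """Ask the model once and parse the first number in its reply."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": TEMPLATE.format(age=age)}],
    ).choices[0].message.content
    match = re.search(r"\d+(?:\.\d+)?", reply)
    return float(match.group()) if match else float("nan")

for age in (25, 65):
    # Sample repeatedly: a persistent gap between the two averages is a
    # bias signal, since everything except age is identical.
    samples = [lend_amount(age) for _ in range(20)]
    print(age, statistics.fmean(samples))
```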
“Humans have biases, of course,” says Prof. Dover. “But what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”
Different Models, Different Judgments
Another key insight: there is no single “AI opinion.”
Different models often made different judgments about the same person. In some cases, one system rewarded a trait that another penalized. That means the choice of the AI system can shape real-world outcomes.
“Which model you use really matters,” Lerman notes. “Two systems can look similar on the surface but behave very differently when making decisions about people.”
AI is already being used to screen job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions. As these systems move from assistants to decision-makers, understanding how they “think” becomes critical.
The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.
“These systems are powerful,” says Prof. Dover. “They can model aspects of human reasoning in a consistent way. But they are not human, and we shouldn’t assume they see people the way we do.”
As AI becomes more embedded in everyday life, the question is no longer whether we trust machines. It’s whether we understand how they trust us.
The research paper titled “A closer look at how large language models ‘trust’ humans: patterns and biases” is now available in Proceedings of the Royal Society A.
Researchers:
Valeria Lerman¹, Yaniv Dover¹˒²˒³
Institutions:
1. The Hebrew University Business School
2. The Federmann Center for the Study of Rationality, The Hebrew University of Jerusalem
3. The Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem



