A study using a fictitious AI-driven dating site suggests people do not always want to peek inside the AI “black box.” Instead, their desire for explanations depends on whether the system meets, exceeds or disappoints their expectations.
Artificial intelligence is often described as a mysterious black box, but new research suggests people do not always want to look inside. Instead, their appetite for explanations about how AI works depends heavily on whether the system behaves the way they expect.
In an experiment built around a fictitious dating platform, a research team that included Penn State scholars found that users’ trust in an AI system — and their desire to understand it — shifted when the system met, exceeded or fell short of what it promised.
The findings, published online ahead of their appearance in the April 2026 issue of Computers in Human Behavior, could help designers in fields from health care to finance decide when to offer simple reassurance and when to provide deeper, tailored explanations.
“AI can create all kinds of soul searching for people — especially in sensitive personal domains like online dating,” co-author S. Shyam Sundar, an Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects in the Penn State Donald P. Bellisario College of Communications, said in a news release.
To explore how people respond when AI meets or violates expectations, the researchers created smartmatch.com, a fake dating website powered by a fabricated algorithm. They recruited 227 single adults in the United States and asked them to use the site as if they were real users.
Participants answered typical dating questions about their interests and what they look for in a partner. The site then told them it would show 10 potential matches on a “Discover Page” and that it “normally generates five ‘Top Picks’ for each user.”
Behind the scenes, the researchers randomly assigned each person to one of nine conditions that varied how many “Top Picks” they actually saw and how the site framed those results. Some participants saw the promised five top matches, along with a message confirming that five options was the norm. Others saw only two or as many as 10, accompanied by a message explaining that while five was typical, the system had found fewer or more this time.
Those mismatches can trigger self-doubt, according to lead author Yuan Sun, an assistant professor in the University of Florida’s College of Journalism and Communications, who completed her doctorate at Penn State with Sundar as her advisor.
“If someone expects five matches but gets two or 10, they may think they’ve done something wrong or that something is wrong with them,” Sun said in the news release.
Participants could then choose to request more information about how their results were generated and were asked to rate how much they trusted the system.
The pattern that emerged was striking. When the site delivered exactly what it had promised — five top picks — users tended to trust the system and felt little need to dig into how it worked. When it overdelivered, offering more matches than expected, a brief explanation that clarified the surprise was enough to boost trust. But when the system underdelivered, showing fewer matches than promised, users wanted more detailed, substantive explanations before they were willing to trust the algorithm.
That nuance is missing from many current conversations about AI design, noted Sun.
“Many developers talk about making AI more transparent and understandable by providing specific information,” she said. “There is far less discussion about when those explanations are necessary and how much should be presented. That’s the gap we’re interested in filling.”
The study builds on decades of research into what communication scholars call expectancy violations — moments when someone or something behaves differently than we anticipate. Co-author Joseph B. Walther, the Bertelsen Presidential Chair in Technology and Society and distinguished professor of communication at the University of California, Santa Barbara, has long studied how people react when other people break social expectations.
In human relationships, unexpected behavior often leads us to judge the other person more harshly or more favorably, and to decide whether to move closer or pull away. Asking someone directly why they acted a certain way can feel intrusive or awkward.
“Being able to find out ‘why the surprise?’ is a luxury and source of satisfaction,” Walther said. “But it appears that we’re unafraid to ask the intelligent machine for an explanation.”
That willingness to interrogate algorithms could be powerful — if the explanations are actually helpful. The researchers noted that many apps and platforms already offer some form of AI explanation, but often in dense, technical language buried in terms and conditions.
Sundar, who directs the Penn State Center for Socially Responsible Artificial Intelligence and co-directs the Media Effects Research Laboratory, noted those approaches tend to fall flat.
“Tons of studies show that these explanations don’t work well. They’re not effective in the goal of transparency to enhance user experience and trust,” he said. “No one really benefits. It’s due diligence rather than being socially responsible.”
The new study suggests that simply making AI more accurate or more generous is not enough to win over users. Even when the dating site delivered more matches than promised, people still wanted to know why.
Sun noted the team initially assumed that strong performance would speak for itself.
“Good is good, so we thought people would be satisfied with face value, but they weren’t. They were curious,” she said. “It’s not just performance; it’s transparency. Higher transparency gives people more understanding of the system, leading to higher trust.”
That insight has broad implications as AI systems increasingly influence what we see, what we buy and even what medical or financial options we are offered. In high-stakes settings, a mismatch between what users expect and what the system delivers could erode trust or lead people to ignore useful recommendations — unless the AI can explain itself in the right way at the right time.
The researchers argue that companies need to move beyond generic disclosures and one-size-fits-all explanations.
“We can’t just say there’s information in the terms and conditions, and that absolves us,” Sun added. “We need more user-centered, tailored explanations to help people better understand AI systems when they want it and in a way that meets their needs. This study opens the door to more research that could help achieve that.”
Mengqi “Maggie” Liao, an assistant professor of advertising at the University of Georgia who completed her doctorate at Penn State, also co-authored the study.
For students and everyday users, the takeaway is twofold: It is reasonable to expect AI systems to be clear about what they are doing, and it is equally important to pay attention to how your own expectations shape your trust. When an app surprises you — whether by giving you far less or far more than you anticipated — that may be the moment to pause, ask for an explanation and decide whether the system deserves your confidence.
Source: Pennsylvania State University

