{"id":33424,"date":"2026-01-21T16:11:00","date_gmt":"2026-01-21T16:11:00","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=33424"},"modified":"2026-01-23T18:53:08","modified_gmt":"2026-01-23T18:53:08","slug":"when-do-we-want-ai-to-explain-itself-dating-study-offers-clues","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/when-do-we-want-ai-to-explain-itself-dating-study-offers-clues\/","title":{"rendered":"When Do We Want AI to Explain Itself? Dating Study Offers Clues"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A study using a fictitious AI-driven dating site suggests people do not always want to peek inside the AI \u201cblack box.\u201d Instead, their desire for explanations depends on whether the system meets, exceeds or disappoints their expectations.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Artificial intelligence is often described as a mysterious black box, but new research suggests people do not always want to look inside. Instead, their appetite for explanations about how AI works depends heavily on whether the system behaves the way they expect.<\/p>\n\n\n\n<p>In an experiment built around a fictitious dating platform, a research team that included Penn State scholars found that users\u2019 trust in an AI system \u2014 and their desire to understand it \u2014 shifted when the system met, exceeded or fell short of what it promised.<\/p>\n\n\n\n<p>The findings, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S074756322500322X\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> online ahead of their appearance in the April 2026 issue of Computers in Human Behavior, could help designers in fields from health care to finance decide when to offer simple reassurance and when to provide deeper, tailored explanations.<\/p>\n\n\n\n<p>\u201cAI can create all kinds of soul searching for people \u2014 especially in sensitive personal domains like online dating,\u201d co-author S. Shyam Sundar, an Evan Pugh University Professor and James P. Jimirro Professor of Media Ethics in the Penn State Donald P. Bellisario College of Communications, said in a news release.<\/p>\n\n\n\n<p>To explore how people respond when AI meets or violates expectations, the researchers created smartmatch.com, a fake dating website powered by a fabricated algorithm. They recruited 227 single adults in the United States and asked them to use the site as if they were real users.<\/p>\n\n\n\n<p>Participants answered typical dating questions about their interests and what they look for in a partner. 
The site then told them it would show 10 potential matches on a \u201cDiscover Page\u201d and that it \u201cnormally generates five \u2018Top Picks\u2019 for each user.\u201d<\/p>\n\n\n\n<p>Behind the scenes, the researchers randomly assigned each person to one of nine conditions that varied how many \u201cTop Picks\u201d they actually saw and how the site framed those results. Some participants saw the promised five top matches, along with a message confirming that five options was the norm. Others saw only two or as many as 10, accompanied by a message explaining that while five was typical, the system had found fewer or more this time.<\/p>\n\n\n\n<p>Those mismatches can trigger self-doubt, according to lead author Yuan Sun, an assistant professor in the University of Florida\u2019s College of Journalism and Communications, who completed her doctorate at Penn State with Sundar as her advisor.<\/p>\n\n\n\n<p>\u201cIf someone expects five matches, but gets two or 10, then a user may think they\u2019ve done something wrong or that something is wrong with them,\u201d Sun said in the news release.<\/p>\n\n\n\n<p>Participants could then choose to request more information about how their results were generated and were asked to rate how much they trusted the system.<\/p>\n\n\n\n<p>The pattern that emerged was striking. When the site delivered exactly what it had promised \u2014 five top picks \u2014 users tended to trust the system and felt little need to dig into how it worked. When it overdelivered, offering more matches than expected, a brief explanation that clarified the surprise was enough to boost trust. 
But when the system underdelivered, showing fewer matches than promised, users wanted more detailed, substantive explanations before they were willing to trust the algorithm.<\/p>\n\n\n\n<p>That nuance is missing from many current conversations about AI design, noted Sun.<\/p>\n\n\n\n<p>\u201cMany developers talk about making AI more transparent and understandable by providing specific information,\u201d she said. \u201cThere is far less discussion about when those explanations are necessary and how much should be presented. That\u2019s the gap we\u2019re interested in filling.\u201d<\/p>\n\n\n\n<p>The study builds on decades of research into what communication scholars call expectancy violations \u2014 moments when someone or something behaves differently than we anticipate. Co-author Joseph B. Walther, the Bertelsen Presidential Chair in Technology and Society and distinguished professor of communication at the University of California, Santa Barbara,\u00a0has long studied how people react when other people break social expectations.<\/p>\n\n\n\n<p>In human relationships, unexpected behavior often leads us to judge the other person more harshly or more favorably, and to decide whether to move closer or pull away. Asking someone directly why they acted a certain way can feel intrusive or awkward.<\/p>\n\n\n\n<p>\u201cBeing able to find out \u2018why the surprise?\u2019 is a luxury and source of satisfaction,\u201d Walther added. \u201cBut it appears that we\u2019re unafraid to ask the intelligent machine for an explanation.\u201d<\/p>\n\n\n\n<p>That willingness to interrogate algorithms could be powerful \u2014 if the explanations are actually helpful. 
The researchers noted that many apps and platforms already offer some form of AI explanation, but often in dense, technical language buried in terms and conditions.<\/p>\n\n\n\n<p>Sundar, who directs the Penn State Center for Socially Responsible Artificial Intelligence and co-directs the Media Effects Research Laboratory, noted those approaches tend to fall flat.<\/p>\n\n\n\n<p>\u201cTons of studies show that these explanations don\u2019t work well. They\u2019re not effective in the goal of transparency to enhance user experience and trust,\u201d he said. \u201cNo one really benefits. It\u2019s due diligence rather than being socially responsible.\u201d<\/p>\n\n\n\n<p>The new study suggests that simply making AI more accurate or more generous is not enough to win over users. Even when the dating site delivered more matches than promised, people still wanted to know why.<\/p>\n\n\n\n<p>Sun noted the team initially assumed that strong performance would speak for itself.<\/p>\n\n\n\n<p>\u201cGood is good, so we thought people would be satisfied with face value, but they weren\u2019t. They were curious,\u201d she said. \u201cIt\u2019s not just performance; it\u2019s transparency. Higher transparency gives people more understanding of the system, leading to higher trust.\u201d<\/p>\n\n\n\n<p>That insight has broad implications as AI systems increasingly influence what we see, what we buy and even what medical or financial options we are offered. In high-stakes settings, a mismatch between what users expect and what the system delivers could erode trust or lead people to ignore useful recommendations \u2014 unless the AI can explain itself in the right way at the right time.<\/p>\n\n\n\n<p>The researchers argue that companies need to move beyond generic disclosures and one-size-fits-all explanations.<\/p>\n\n\n\n<p>\u201cWe can\u2019t just say there\u2019s information in the terms and conditions, and that absolves us,\u201d Sun added. 
\u201cWe need more user-centered, tailored explanations to help people better understand AI systems when they want it and in a way that meets their needs. This study opens the door to more research that could help achieve that.\u201d<\/p>\n\n\n\n<p>Co-author Mengqi \u201cMaggie\u201d Liao, an assistant professor in advertising at the University of Georgia who completed her doctorate at Penn State, also contributed to the project.<\/p>\n\n\n\n<p>For students and everyday users, the takeaway is twofold: It is reasonable to expect AI systems to be clear about what they are doing, and it is equally important to pay attention to how your own expectations shape your trust. When an app surprises you \u2014 whether by giving you far less or far more than you anticipated \u2014 that may be the moment to pause, ask for an explanation and decide whether the system deserves your confidence.<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.psu.edu\/news\/research\/story\/explain-or-not-need-ai-transparency-depends-user-expectation\" target=\"_blank\" rel=\"noopener\" title=\"\">Pennsylvania State University<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A study using a fictitious AI-driven dating site suggests people do not always want to peek inside the AI \u201cblack box.\u201d Instead, their desire for explanations depends on whether the system meets, exceeds or disappoints their 
expectations.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[140,164],"class_list":["post-33424","post","type-post","status-publish","format-standard","hentry","category-ai","tag-penn-state-university","tag-uc-santa-barbara"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"A study using a fictitious AI-driven dating site suggests people do not always want to peek inside the AI \u201cblack box.\u201d Instead, their desire for explanations depends on whether the system meets, exceeds or disappoints their 
expectations.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/33424","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=33424"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/33424\/revisions"}],"predecessor-version":[{"id":33524,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/33424\/revisions\/33524"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=33424"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=33424"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=33424"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}