{"id":32325,"date":"2025-12-12T21:26:29","date_gmt":"2025-12-12T21:26:29","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=32325"},"modified":"2025-12-12T21:26:32","modified_gmt":"2025-12-12T21:26:32","slug":"study-smarter-ai-explanations-help-doctors-read-cancer-scans","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/study-smarter-ai-explanations-help-doctors-read-cancer-scans\/","title":{"rendered":"Study: Smarter AI Explanations Help Doctors Read Cancer Scans"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">New research from Stevens Institute of Technology finds that artificial intelligence can sharpen doctors\u2019 breast cancer image diagnoses \u2014 but only when its explanations are designed to support, not overload, clinicians.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Artificial 
intelligence is already helping doctors spot cancer earlier and more accurately. But new research from Stevens Institute of Technology suggests that how AI explains its decisions can make the difference between life-saving support and dangerous distraction.<\/p>\n\n\n\n<p>In two recent studies focused on breast cancer imaging, Stevens researchers found that AI tools can improve how accurately oncologists and radiologists read medical images. At the same time, they discovered that piling on extra explanations about how the AI reached its conclusions can actually slow clinicians down, increase their mental workload and, in some cases, make them more likely to make mistakes.<\/p>\n\n\n\n<p>The work zeroes in on a central tension in modern medicine: how to harness the power of AI while keeping human experts firmly in control.<\/p>\n\n\n\n<p>AI systems are already widely used to scan X-rays, MRIs and CT images for subtle patterns that can be hard for humans to see. With access to large medical datasets and powerful computing, these systems can sift through data at a speed no person can match.<\/p>\n\n\n\n<p>\u201cAI systems can process thousands of images quickly and provide predictions much faster than human reviewers,\u201d senior author Onur Asan, an associate professor at Stevens whose research focuses on how people interact with technology in health care, said in a news release. \u201cUnlike humans, AI does not get tired or lose focus over time.\u201d<\/p>\n\n\n\n<p>Yet many clinicians remain wary of relying on AI. A major reason is that many systems function as a so-called \u201cblack box,\u201d offering a prediction or risk score without a clear explanation of how they got there.<\/p>\n\n\n\n<p>\u201cWhen clinicians don\u2019t know how AI generates its predictions, they are less likely to trust it,\u201d added Asan. 
\u201cSo we wanted to find out whether providing extra explanations may help clinicians, and how different degrees of AI explainability influence diagnostic accuracy, as well as trust in the system.\u201d<\/p>\n\n\n\n<p>To explore that question, Asan worked with doctoral student Olya Rezaeian at Stevens and assistant professor Alparslan Emrah Bayrak at Lehigh University. The team studied 28 oncologists and radiologists as they used an AI system to analyze breast cancer images.<\/p>\n\n\n\n<p>All of the clinicians saw AI-generated assessments of the images. Some also received additional layers of explanation about how the AI arrived at its conclusions. After reviewing the images, participants answered questions about how confident they were in the AI\u2019s assessment and how difficult they found the task.<\/p>\n\n\n\n<p>The researchers reported that clinicians who used AI were more accurate overall than those in a control group who did not have AI support. But the benefits came with important conditions.<\/p>\n\n\n\n<p>One key finding: more explanation was not always better.<\/p>\n\n\n\n<p>\u201cWe found that more explainability doesn\u2019t equal more trust,\u201d Asan added.<\/p>\n\n\n\n<p>The team observed that when explanations became more detailed or complex, clinicians had to spend more time processing that information. That extra effort pulled attention away from the images themselves and slowed decision-making, which in turn hurt overall performance.<\/p>\n\n\n\n<p>\u201cProcessing more information adds more cognitive workload to clinicians. It also makes them more likely to make mistakes and possibly harm the patient,\u201d added Asan. \u201cYou don&#8217;t want to add cognitive load to the users by adding more tasks.\u201d<\/p>\n\n\n\n<p>The studies also flagged a different kind of risk: overconfidence in AI. 
In some cases, clinicians trusted the system\u2019s output so strongly that they were less likely to question it, even when it was wrong.<\/p>\n\n\n\n<p>\u201cIf an AI system is not designed well and makes some errors while users have high confidence in it, some clinicians may develop a blind trust believing that whatever the AI is suggesting is true, and not scrutinize the results enough,\u201d Asan added.<\/p>\n\n\n\n<p>The team\u2019s findings are detailed in two papers: one on the impact of AI explanations on trust and diagnostic accuracy in breast cancer, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0003687025001139\" target=\"_blank\" rel=\"noopener\" title=\"\">published in Applied Ergonomics<\/a>, and a second on explainability and AI confidence in clinical decision support systems, <a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/10447318.2025.2539458\" target=\"_blank\" rel=\"noopener\" title=\"\">published in the International Journal of Human\u2013Computer Interaction<\/a>.<\/p>\n\n\n\n<p>Together, the studies point to a design challenge for the next generation of medical AI: build tools that are transparent enough to be trustworthy, but simple enough to be usable in the high-pressure environment of clinical care.<\/p>\n\n\n\n<p>\u201cOur findings suggest that designers should exercise caution when building explanations into the AI systems,\u201d Asan said. <\/p>\n\n\n\n<p>He added that explanations should be crafted so they support clinicians\u2019 thinking instead of overwhelming them.<\/p>\n\n\n\n<p>Training will also be critical. 
Asan emphasized that AI should be seen as an assistant to, not a replacement for, human expertise.<\/p>\n\n\n\n<p>\u201cClinicians who use AI should receive training that emphasizes interpreting the AI outputs and not just trusting it,\u201d he said.<\/p>\n\n\n\n<p>Beyond explainability, Asan pointed to a broader principle from technology adoption research: people are most likely to use a tool when they believe it is both helpful and easy to use.<\/p>\n\n\n\n<p>\u201cResearch finds that there are two main parameters for a person to use any form of technology \u2014 perceived usefulness and perceived ease of use,\u201d he added. \u201cSo if doctors will think that this tool is useful for doing their job, and it\u2019s easy to use, they are going to use it.\u201d<\/p>\n\n\n\n<p>For patients, the stakes are high. Breast cancer remains one of the most common cancers worldwide, and early, accurate detection can dramatically improve outcomes. AI has the potential to catch tumors earlier, reduce missed diagnoses and help standardize care across hospitals and clinics.<\/p>\n\n\n\n<p>But the Stevens research underscores that simply adding AI to the reading room is not enough. 
To truly improve care, systems must be designed with clinicians\u2019 cognitive limits and workflows in mind, and health systems must invest in training that helps doctors understand when to lean on AI and when to question it.<\/p>\n\n\n\n<p>As AI continues to spread through medicine, studies like these offer a roadmap: build tools that amplify human judgment, avoid overloading already stretched clinicians and keep patient safety at the center of every design choice.<\/p>\n\n\n\n<div style=\"height:12px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source: <\/strong><a href=\"https:\/\/www.stevens.edu\/news\/studies-investigate-how-ai-can-aid-clinicians-in-analyzing-medical-images\" target=\"_blank\" rel=\"noopener\" title=\"\">Stevens Institute of Technology<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>New research from Stevens Institute of Technology finds that artificial intelligence can sharpen doctors\u2019 breast cancer image diagnoses \u2014 but only when its explanations are designed to support, not overload, clinicians.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[357],"class_list":["post-32325","post","type-post","status-publish","format-standard","hentry","category-ai","tag-stevens-institute-of-technology"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University 
Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"New research from Stevens Institute of Technology finds that artificial intelligence can sharpen doctors\u2019 breast cancer image diagnoses \u2014 but only when its explanations are designed to support, not overload, clinicians.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32325","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=32325"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32325\/revisions"}],"predecessor-version":[{"id":32423,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32325\/revisions\/32423"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=32325"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=32325"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=32325"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}