{"id":18540,"date":"2025-02-19T15:14:21","date_gmt":"2025-02-19T15:14:21","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=18540"},"modified":"2025-02-19T15:14:22","modified_gmt":"2025-02-19T15:14:22","slug":"new-university-of-surrey-study-demands-accountability-in-ai","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-university-of-surrey-study-demands-accountability-in-ai\/","title":{"rendered":"New University of Surrey Study Demands Accountability in AI"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study from the University of Surrey emphasizes the critical need for AI transparency, revealing a novel framework to ensure understandable and trustworthy AI decisions.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>A call for greater transparency in artificial intelligence is emerging from the 
University of Surrey as AI systems are increasingly influencing high-stakes areas like health care, banking and crime detection. Amid rising concerns about the so-called &#8216;black box&#8217; nature of many AI models, the researchers are advocating for an overhaul in how these systems are designed and assessed to ensure they are both trustworthy and understandable.<\/p>\n\n\n\n<p>The study, <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/08839514.2024.2430867\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in\u00a0Applied Artificial Intelligence, underscores a troubling trend: AI systems often fail to adequately explain their decisions, leaving users befuddled and potentially at risk. Instances of AI errors, such as misdiagnoses in health care and false fraud alerts in banking, are notably alarming, given their potential to cause significant harm.<\/p>\n\n\n\n<p>&#8220;We must not forget that behind every algorithm\u2019s solution, there are real people whose lives are affected by the determined decisions,&#8221; co-author Wolfgang Garn, a senior lecturer in analytics at the University of Surrey, said in a <a href=\"https:\/\/www.surrey.ac.uk\/news\/are-we-trusting-ai-too-much-new-study-demands-accountability-artificial-intelligence\" target=\"_blank\" rel=\"noopener\" title=\"\">news release<\/a>. &#8220;Our aim is to create AI systems that are not only intelligent but also provide explanations to people &#8212; the users of technology &#8212; that they can trust and understand.&#8221;<\/p>\n\n\n\n<p>The study introduces a new framework, known as SAGE (Settings, Audience, Goals and Ethics), designed to tackle the deficiencies in current AI models. 
SAGE focuses on providing contextually relevant explanations to end-users, offering clarity and fostering trust in AI-driven decisions.<\/p>\n\n\n\n<p>Moreover, the researchers employed Scenario-Based Design (SBD) techniques to deeply analyze real-world scenarios, ensuring that AI explanations meet the actual needs of users. This methodological approach pushes developers to consider the end-users&#8217; perspective, embedding empathy and understanding into the core of AI system design.<\/p>\n\n\n\n<p>&#8220;We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritizes user-centric design principles,&#8221; Garn added. &#8220;It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.&#8221;<\/p>\n\n\n\n<p>The researchers emphasized the importance of AI systems delivering their explanations in text or graphical forms to accommodate diverse user comprehension levels. This transformative shift aims to make AI outputs not only accessible but actionable, empowering users with the insights needed to make informed decisions.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A call for greater transparency in artificial intelligence is emerging from the University of Surrey as AI systems are increasingly influencing high-stakes areas like health care, banking and crime detection. 
Amid rising concerns about the so-called &#8216;black box&#8217; nature of many AI models, the researchers are advocating for an overhaul in how these systems are [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[],"class_list":["post-18540","post","type-post","status-publish","format-standard","hentry","category-ai"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"A call for greater transparency in artificial intelligence is emerging from the University of Surrey as AI systems are increasingly influencing high-stakes areas like health care, banking and crime detection. 
Amid rising concerns about the so-called &#8216;black box&#8217; nature of many AI models, the researchers are advocating for an overhaul in how these systems are&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18540","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=18540"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18540\/revisions"}],"predecessor-version":[{"id":18551,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18540\/revisions\/18551"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=18540"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=18540"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=18540"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}