A new study from the University of Surrey highlights the critical need for AI transparency, introducing a framework intended to make AI decisions understandable and trustworthy.
A call for greater transparency in artificial intelligence is emerging from the University of Surrey as AI systems increasingly influence high-stakes areas such as health care, banking and crime detection. Amid rising concerns about the so-called ‘black box’ nature of many AI models, the researchers are advocating an overhaul of how these systems are designed and assessed to ensure they are both trustworthy and understandable.
The study, published in Applied Artificial Intelligence, underscores a troubling trend: AI systems often fail to adequately explain their decisions, leaving users confused and potentially at risk. AI errors such as misdiagnoses in health care and false fraud alerts in banking are particularly alarming, given their potential to cause significant harm.
“We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions,” co-author Wolfgang Garn, a senior lecturer in analytics at the University of Surrey, said in a news release. “Our aim is to create AI systems that are not only intelligent but also provide explanations to people — the users of technology — that they can trust and understand.”
The study introduces a new framework, SAGE (Settings, Audience, Goals and Ethics), designed to address the explanatory shortcomings of current AI models. SAGE focuses on providing end-users with contextually relevant explanations, offering clarity and fostering trust in AI-driven decisions.
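The paper describes SAGE at a conceptual level rather than as code. As a purely illustrative sketch (not an implementation from the study, and with hypothetical names throughout), the four dimensions could be captured in a simple structure that an explanation generator consults before phrasing its output for a given audience:

```python
from dataclasses import dataclass

@dataclass
class SageContext:
    """Hypothetical container for the four SAGE dimensions of an explanation request."""
    settings: str   # where the decision is made, e.g. "hospital triage"
    audience: str   # who reads the explanation, e.g. "patient" or "clinician"
    goals: str      # what the explanation must achieve, e.g. "justify a referral"
    ethics: str     # constraints, e.g. "no disclosure of other patients' data"

def explain(decision: str, reasons: list[str], ctx: SageContext) -> str:
    """Phrase the same underlying decision differently depending on the SAGE context."""
    if ctx.audience == "patient":
        # Lay audiences get a plain-language justification.
        return (f"In plain terms, the system suggested '{decision}' "
                f"because {' and '.join(reasons)}.")
    # Technical audiences get the raw factors plus the setting, goal and constraint.
    return (f"Decision: {decision}\nSetting: {ctx.settings}\nGoal: {ctx.goals}\n"
            f"Contributing factors: {', '.join(reasons)}\n"
            f"Ethical constraint: {ctx.ethics}")

ctx = SageContext("hospital triage", "patient", "justify a referral",
                  "no disclosure of other patients' data")
print(explain("refer to cardiology", ["elevated troponin", "abnormal ECG"], ctx))
```

The point of the sketch is only that the same decision is explained differently once the setting, audience, goal and ethical constraints are made explicit inputs rather than afterthoughts.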
The researchers also employed Scenario-Based Design (SBD) techniques to analyze real-world scenarios in depth, ensuring that AI explanations meet users' actual needs. This methodological approach pushes developers to consider the end-user's perspective, embedding empathy and understanding into the core of AI system design.
“We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritizes user-centric design principles,” Garn added. “It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.”
The researchers emphasized the importance of AI systems delivering explanations in text or graphical form to accommodate users with different levels of comprehension. The shift aims to make AI outputs not only accessible but actionable, empowering users with the insights needed to make informed decisions.
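The article does not specify how text and graphical delivery would be implemented; as a hedged sketch under assumed inputs, a renderer might switch between a prose summary and a simple chart depending on the audience. The ASCII bar chart below stands in for a real graphic, and all names and weights are illustrative:

```python
def render_explanation(factors: dict[str, float], audience: str) -> str:
    """Render the same factor weights as prose or as a simple text bar chart (illustrative)."""
    if audience == "general":
        # A lay reader gets a one-sentence summary of the dominant factor.
        top = max(factors, key=factors.get)
        return (f"The decision was driven mainly by '{top}'; "
                f"other factors had smaller effects.")
    # An analyst view lists each factor with a bar proportional to its weight.
    width = 30
    scale = width / max(abs(v) for v in factors.values())
    rows = [f"{name:<20} {'#' * int(abs(v) * scale):<{width}} {v:+.2f}"
            for name, v in sorted(factors.items(), key=lambda kv: -abs(kv[1]))]
    return "\n".join(rows)

weights = {"elevated troponin": 0.62, "abnormal ECG": 0.31, "age": 0.07}
print(render_explanation(weights, "general"))
print(render_explanation(weights, "analyst"))
```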