{"id":18630,"date":"2025-02-20T22:07:41","date_gmt":"2025-02-20T22:07:41","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=18630"},"modified":"2025-02-20T22:07:43","modified_gmt":"2025-02-20T22:07:43","slug":"new-ai-framework-promises-to-eliminate-bias-in-critical-decision-making","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-ai-framework-promises-to-eliminate-bias-in-critical-decision-making\/","title":{"rendered":"New AI Framework Promises to Eliminate Bias in Critical Decision-Making"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new AI methodology from the University of Navarra aims to eliminate bias in critical decision-making areas like health, education and recruitment. 
This new approach enhances fairness and accuracy, paving the way for more ethical AI applications.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>A team of researchers from the Data Science and Artificial Intelligence Institute (DATAI) at the University of Navarra has developed a new methodology to improve fairness and reliability in artificial intelligence models used for critical decision-making. 
These decisions significantly impact individuals&#8217; lives and the operations of organizations, particularly in fields such as health, education, justice and human resources.<\/p>\n\n\n\n<p>The framework, created by Alberto Garc\u00eda Galindo, Marcos L\u00f3pez De Castro and Rub\u00e9n Arma\u00f1anzas Arnedillo, focuses on optimizing machine learning models&#8217; parameters to enhance transparency and ensure confidence in their predictions.<\/p>\n\n\n\n<p>By addressing and reducing inequalities linked to sensitive attributes like race, gender or socioeconomic status, the new AI methodology promises to deliver fairer outcomes without sacrificing accuracy.<\/p>\n\n\n\n<p>&#8220;The widespread use of artificial intelligence in sensitive domains has raised ethical concerns due to possible algorithmic discrimination,&#8221; Arma\u00f1anzas Arnedillo, principal researcher at the University of Navarra&#8217;s DATAI, said in a <a href=\"https:\/\/en.unav.edu\/news\/-\/contents\/18\/02\/2025\/desarrollan-un-modelo-ia-que-garantiza-decisiones-sin-sesgos-en-areas-clave-como-salud-educacion-y-contratacion\/content\/lovPblW1fC70\/151705506\" target=\"_blank\" rel=\"noopener\" title=\"\">news release<\/a>. &#8220;Our approach allows companies and public policymakers to choose models that balance efficiency and fairness according to their needs, responding to emerging regulations. This breakthrough is part of the University of Navarra&#8217;s commitment to promoting a philosophy of responsible AI and the ethical and transparent use of this technology.&#8221;<\/p>\n\n\n\n<p>In their study, <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10994-024-06721-w\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the renowned journal Machine Learning, the team combined a cutting-edge prediction technique known as conformal prediction with evolutionary learning algorithms inspired by natural processes. 
<\/p>\n\n\n\n<p>This combination results in algorithms that provide rigorous confidence levels while ensuring equitable treatment across different social and demographic groups.<\/p>\n\n\n\n<p>The methodology was rigorously tested on four benchmark datasets from diverse real-world domains, including economic income, criminal recidivism, hospital readmission and school applications.<\/p>\n\n\n\n<p>The results were promising, showing a significant reduction in biases without compromising predictive accuracy.<\/p>\n\n\n\n<p>&#8220;In our analysis we found, for example, striking biases in school admission predictions, revealing a significant lack of fairness based on family financial status,&#8221; added first author Garc\u00eda Galindo, a DATAI predoctoral researcher. &#8220;In turn, these experiments demonstrated that, in many cases, our methodology reduces such biases without compromising the predictive ability of the model. In particular, our model found solutions in which discrimination was almost entirely eliminated while maintaining the accuracy of the predictions.&#8221;<\/p>\n\n\n\n<p>This methodology also introduces a &#8216;Pareto front&#8217; of optimal algorithms, allowing stakeholders to visualize the best available options based on their priorities and better understand the trade-off between algorithmic fairness and accuracy.<\/p>\n\n\n\n<p>The researchers believe that the potential impact of this innovation is vast, particularly in sectors where AI must support critical decision-making reliably and ethically.<\/p>\n\n\n\n<p>Garc\u00eda Galindo added that their &#8220;methodology not only contributes to fairness, but also allows a deeper understanding of how the configuration of the models influences the results, which could guide future research in the regulation of AI algorithms.&#8221;<\/p>\n\n\n\n<p>To promote further research and transparency in this evolving field, the researchers have made the code and data from 
their study publicly available.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A team of researchers from the Data Science and Artificial Intelligence Institute (DATAI) at the University of Navarra has developed a new methodology to improve fairness and reliability in artificial intelligence models used for critical decision-making. These decisions significantly impact individuals&#8217; lives and the operations of organizations, particularly in fields such as health, education, justice [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[],"class_list":["post-18630","post","type-post","status-publish","format-standard","hentry","category-ai"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"A team of researchers from the Data Science and Artificial Intelligence Institute (DATAI) at the University of Navarra has developed a new methodology to improve fairness and reliability in artificial intelligence models used for critical decision-making. 
These decisions significantly impact individuals&#8217; lives and the operations of organizations, particularly in fields such as health, education, justice&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18630","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=18630"}],"version-history":[{"count":12,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18630\/revisions"}],"predecessor-version":[{"id":18645,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/18630\/revisions\/18645"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=18630"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=18630"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=18630"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}