A new AI methodology from the University of Navarra aims to reduce bias in critical decision-making areas such as health, education and recruitment. The approach enhances fairness without sacrificing accuracy, paving the way for more ethical AI applications.
A team of researchers from the Data Science and Artificial Intelligence Institute (DATAI) at the University of Navarra has developed a new methodology to improve fairness and reliability in artificial intelligence models used for critical decision-making. These decisions significantly impact individuals’ lives and the operations of organizations, particularly in fields such as health, education, justice and human resources.
The framework, created by Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo, focuses on optimizing machine learning models’ parameters to enhance transparency and ensure confidence in their predictions.
By addressing and reducing inequalities linked to sensitive attributes like race, gender or socioeconomic status, the new AI methodology promises to deliver fairer outcomes without sacrificing accuracy.
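To give a concrete sense of how inequalities linked to a sensitive attribute can be quantified, here is a minimal, hypothetical sketch of one common group-fairness check, the demographic parity gap. The study may use different fairness criteria; the function and data below are illustrative only.

```python
# Hedged illustration of a common group-fairness metric (demographic parity gap).
# The attribute encoding and sample data are hypothetical, not from the study.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# e.g. binary predictions for four applicants, split by a sensitive attribute
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))  # -> 0.5
```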
“The widespread use of artificial intelligence in sensitive domains has raised ethical concerns due to possible algorithmic discrimination,” Armañanzas Arnedillo, principal researcher at the University of Navarra’s DATAI, said in a news release. “Our approach allows companies and public policymakers to choose models that balance efficiency and fairness according to their needs, responding to emerging regulations. This breakthrough is part of the University of Navarra’s commitment to promoting the ethical and transparent use of this technology.”
In their study, published in the renowned journal Machine Learning, the team combined a cutting-edge prediction technique known as conformal prediction with evolutionary learning algorithms inspired by natural processes.
This combination results in algorithms that provide rigorous confidence levels while ensuring equitable treatment across different social and demographic groups.
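To illustrate the conformal prediction half of this combination, here is a minimal sketch of split conformal classification in Python, assuming scikit-learn and integer-encoded class labels. This is not the authors' implementation, only a standard textbook version of the underlying technique.

```python
# Minimal sketch of split conformal prediction for classification.
# Hypothetical example; the study's actual models and scores differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def split_conformal_sets(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Return prediction sets with roughly (1 - alpha) marginal coverage."""
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Nonconformity score: 1 - probability assigned to the true class
    # (assumes y_cal holds integer labels 0..k-1).
    cal_probs = model.predict_proba(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0))

    # A test point's set contains every class whose score clears the threshold.
    test_probs = model.predict_proba(X_test)
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]
```

The key property is that the returned sets are guaranteed, on average, to contain the true label at the chosen confidence level, which is the "rigorous confidence" the article describes.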
The methodology was rigorously tested on four benchmark datasets from diverse real-world domains, including economic income, criminal recidivism, hospital readmission and school applications.
The results were promising, showing a significant reduction in biases without compromising predictive accuracy.
“In our analysis we found, for example, striking biases in the prediction of school admissions, showing a significant lack of fairness based on family financial status,” added first author García Galindo, a DATAI predoctoral researcher. “In turn, these experiments demonstrated that, in many cases, our methodology manages to reduce such biases without compromising the predictive ability of the model. In particular, with our model we found solutions in which discrimination was almost entirely eliminated while the accuracy of the predictions was maintained.”
The methodology also introduces a ‘Pareto front’ of optimal algorithms: a set of models in which neither fairness nor accuracy can be improved without degrading the other. This allows stakeholders to visualize the best available options based on their priorities and better understand the trade-off between algorithmic fairness and accuracy.
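As an informal illustration of how a Pareto front is extracted from a pool of candidate models, the sketch below keeps only the models that are not dominated on both accuracy and fairness. The model names and scores are hypothetical.

```python
# Illustrative sketch of extracting a Pareto front over candidate models,
# each scored on (accuracy, fairness); higher is better on both axes.
def pareto_front(candidates):
    """Keep models that no other model beats on both objectives."""
    front = []
    for name, acc, fair in candidates:
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for _, a, f in candidates
        )
        if not dominated:
            front.append((name, acc, fair))
    return front

# Hypothetical candidates: gb_a is dominated by rf_a and drops out.
models = [("rf_a", 0.86, 0.70), ("rf_b", 0.84, 0.91),
          ("gb_a", 0.85, 0.62), ("gb_b", 0.83, 0.95)]
print(pareto_front(models))
# -> [('rf_a', 0.86, 0.7), ('rf_b', 0.84, 0.91), ('gb_b', 0.83, 0.95)]
```

Each surviving model represents a different balance between the two objectives, which is what lets stakeholders pick according to their priorities.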
The researchers believe that the potential impact of this innovation is vast, particularly in sectors where AI must support critical decision-making reliably and ethically.
García Galindo added that their “methodology not only contributes to fairness, but also allows a deeper understanding of how the configuration of the models influences the results, which could guide future research in the regulation of AI algorithms.”
To promote further research and transparency in this evolving field, the researchers have made the code and data from their study publicly available.