New Study Highlights Urgent Need to Tackle AI Bias

A new study co-authored by Naveen Kumar of the University of Oklahoma underscores the pressing need to mitigate bias in generative AI models, a step the authors describe as crucial for fair and transparent decision-making across multiple sectors.

In the study, published in the journal Information & Management, the researchers call attention to the urgent need to combat inherent biases in generative AI models by developing and implementing ethical, explainable AI.

The research points out that as large language models (LLMs) become more affordable and widely used, their built-in biases could have far-reaching and detrimental effects.

“As international players like DeepSeek and Alibaba release platforms that are either free or much less expensive, there is going to be a global AI price race,” co-author Naveen Kumar, an associate professor of management information systems at the University of Oklahoma’s Price College of Business, said in a news release. “When price is the priority, will there still be a focus on ethical issues and regulations around bias? Or, since there are now international companies involved, will there be a push for more rapid regulation? We hope it’s the latter, but we will have to wait and see.”

The study highlights that nearly a third of surveyed individuals believe they have missed out on opportunities, such as jobs or financial services, because of biased AI algorithms. Kumar notes that while efforts have been made to eliminate explicit biases, implicit biases remain a significant challenge.

As AI models become more sophisticated, spotting these implicit biases will become increasingly difficult — making ethical policies even more vital.

“As these LLMs play a bigger role in society, specifically in finance, marketing, human relations and even healthcare, they must align with human preferences. Otherwise, they could lead to biased outcomes and unfair decisions,” Kumar added. “Biased models in healthcare can lead to inequities in patient care; biased recruitment algorithms could favor one gender or race over another; or biased advertising models may perpetuate stereotypes.”

Kumar and his colleagues emphasize the importance of establishing explainable AI and ethical policies. However, they also call on scholars to devise proactive technical and organizational solutions to monitor and mitigate LLM bias. They advocate for a balanced approach to ensuring AI applications remain effective, fair and transparent.

“This industry is moving very fast, so there is going to be a lot of tension between stakeholders with differing objectives. We must balance the concerns of each player — the developer, the business executive, the ethicist, the regulator — to appropriately address bias in these LLM models,” added Kumar. “Finding the sweet spot across different business domains and different regional regulations will be the key to success.”

As AI continues to evolve, this research underscores the necessity for a vigilant, ethical approach to ensure that the transformative power of AI benefits everyone fairly and equitably.