{"id":17605,"date":"2025-02-12T18:37:16","date_gmt":"2025-02-12T18:37:16","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=17605"},"modified":"2025-02-12T18:37:17","modified_gmt":"2025-02-12T18:37:17","slug":"new-study-calls-for-urgent-need-to-tackle-ai-bias","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-study-calls-for-urgent-need-to-tackle-ai-bias\/","title":{"rendered":"New Study Calls for Urgent Need to Tackle AI Bias"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study co-authored by Naveen Kumar from the University of Oklahoma underscores the pressing need to mitigate bias in generative AI models, crucial for fair and transparent decision-making across multiple sectors.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n\n\n\n<p>In a new study <a 
href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0378720625000060?via%3Dihub\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the journal Information &amp; Management, researchers call attention to the urgent need to combat inherent biases within generative AI models by developing and implementing ethical, explainable AI. <\/p>\n<\/div><\/div>\n\n\n\n<p>The research points out that as large language models (LLMs) become more affordable and widely used, their built-in biases could have far-reaching and detrimental effects.<\/p>\n\n\n\n<p>&#8220;As international players like DeepSeek and Alibaba release platforms that are either free or much less expensive, there is going to be a global AI price race,&#8221; co-author Naveen Kumar, an associate professor of management information systems\u00a0at the University of Oklahoma\u2019s Price College of Business, said in a <a href=\"https:\/\/www.ou.edu\/news\/articles\/2025\/february\/how-a-i-bias-shapes-everything-from-hiring-to-healthcare\" target=\"_blank\" rel=\"noopener\" title=\"\">news release<\/a>. &#8220;When price is the priority, will there still be a focus on ethical issues and regulations around bias? Or, since there are now international companies involved, will there be a push for more rapid regulation? We hope it\u2019s the latter, but we will have to wait and see.&#8221;<\/p>\n\n\n\n<p>The study highlights that nearly a third of individuals surveyed believe they have missed out on opportunities such as jobs or financial services due to biased AI algorithms. Kumar notes that while efforts have been made to eliminate explicit biases, implicit biases remain a significant challenge. 
<\/p>\n\n\n\n<p>As AI models become more sophisticated, spotting these implicit biases will become increasingly difficult \u2014 making ethical policies even more vital.<\/p>\n\n\n\n<p>&#8220;As these LLMs play a bigger role in society, specifically in finance, marketing, human relations and even healthcare, they must align with human preferences. Otherwise, they could lead to biased outcomes and unfair decisions,&#8221; Kumar added. &#8220;Biased models in healthcare can lead to inequities in patient care; biased recruitment algorithms could favor one gender or race over another; or biased advertising models may perpetuate stereotypes.&#8221;<\/p>\n\n\n\n<p>Kumar and his colleagues emphasize the importance of establishing explainable AI and ethical policies. However, they also call on scholars to devise proactive technical and organizational solutions to monitor and mitigate LLM bias. They advocate for a balanced approach to ensuring AI applications remain effective, fair and transparent.<\/p>\n\n\n\n<p>&#8220;This industry is moving very fast, so there is going to be a lot of tension between stakeholders with differing objectives. We must balance the concerns of each player \u2014 the developer, the business executive, the ethicist, the regulator \u2014 to appropriately address bias in these LLM models,&#8221; added Kumar. &#8220;Finding the sweet spot across different business domains and different regional regulations will be the key to success.&#8221;<\/p>\n\n\n\n<p>As AI continues to evolve, this research underscores the necessity for a vigilant, ethical approach to ensure that the transformative power of AI benefits everyone fairly and equitably.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In a new study published in the journal Information &amp; Management, researchers call attention to the urgent need to combat inherent biases within generative AI models by developing and implementing ethical, explainable AI. 
The research points out that as large language models (LLMs) become more affordable and widely used, their built-in biases could have far-reaching [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[],"class_list":["post-17605","post","type-post","status-publish","format-standard","hentry","category-ai"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"In a new study published in the journal Information &amp; Management, researchers call attention to the urgent need to combat inherent biases within generative AI models by developing and implementing ethical, explainable AI. 
The research points out that as large language models (LLMs) become more affordable and widely used, their built-in biases could have far-reaching&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/17605","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=17605"}],"version-history":[{"count":11,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/17605\/revisions"}],"predecessor-version":[{"id":18053,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/17605\/revisions\/18053"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=17605"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=17605"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=17605"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}