{"id":12535,"date":"2024-12-18T21:38:50","date_gmt":"2024-12-18T21:38:50","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=12535"},"modified":"2024-12-18T21:38:51","modified_gmt":"2024-12-18T21:38:51","slug":"study-reveals-ai-systems-amplify-human-biases-prompting-feedback-loop","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/study-reveals-ai-systems-amplify-human-biases-prompting-feedback-loop\/","title":{"rendered":"Study Reveals AI Systems Amplify Human Biases, Prompting Feedback Loop"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study by UCL researchers reveals that AI systems amplify human biases, creating a feedback loop that can deepen user prejudices. 
The findings stress the importance of unbiased AI development.<br><\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Artificial intelligence systems tend to adopt and amplify human biases, leading users to become more prejudiced, according to a new study by researchers at University College London (UCL). 
This discovery points to a significant feedback loop in which minor initial biases can increase human errors, potentially snowballing into larger problems.<\/p>\n\n\n\n<p>The study, <a href=\"https:\/\/www.nature.com\/articles\/s41562-024-02077-2\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in Nature Human Behaviour, demonstrates real-world implications, indicating that people interacting with biased AI systems are more likely to underestimate women&#8217;s performance and overestimate the likelihood of white men holding high-status jobs.<\/p>\n\n\n\n<p>&#8220;People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data. 
AI then tends to exploit and amplify these biases to improve its prediction accuracy,&#8221; co-lead author Tali Sharot, a professor of cognitive science at UCL, said in a <a href=\"https:\/\/www.ucl.ac.uk\/news\/2024\/dec\/bias-ai-amplifies-our-own-biases\" target=\"_blank\" rel=\"noopener\" title=\"\">news release<\/a>. &#8220;Here, we\u2019ve found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI.\u201d<\/p>\n\n\n\n<p>The researchers conducted experiments involving over 1,200 participants who interacted with various AI systems while completing different tasks. <\/p>\n\n\n\n<p>In one experiment, an AI algorithm trained on a dataset of human responses displayed a bias toward judging faces as sad. Participants who later interacted with this AI system began to show a greater tendency to judge faces as sad, illustrating how the AI&#8217;s learned bias influenced human perceptions.<\/p>\n\n\n\n<p>Similarly, participants assessing performance on tasks were more likely to overestimate men&#8217;s abilities after interacting with a gender-biased AI system. <\/p>\n\n\n\n<p>Another part of the study involved the generative AI system Stable Diffusion, which amplified existing societal biases by overrepresenting white men as financial managers. Participants exposed to images generated by this AI were subsequently more inclined to select white men as likely financial managers.<\/p>\n\n\n\n<p>&#8220;Not only do biased people contribute to biased AIs, but biased AI systems can alter people\u2019s own beliefs so that people using AI tools can end up becoming more biased in domains ranging from social judgements to basic perception,&#8221; co-lead author Moshe Glickman, a research fellow in experimental psychology at UCL, said in the news release. 
&#8220;Importantly, however, we also found that interacting with accurate AIs can improve people\u2019s judgements, so it\u2019s vital that AI systems are refined to be as unbiased and as accurate as possible.\u201d<\/p>\n\n\n\n<p>Moreover, the study found that when participants falsely believed they were interacting with another human rather than an AI, they internalized the biases to a lesser extent. This suggests that the perceived accuracy of AI plays a role in how deeply users internalize its biases.<\/p>\n\n\n\n<p>&#8220;Algorithm developers have a great responsibility in designing AI systems; the influence of AI biases could have profound implications as AI becomes increasingly prevalent in many aspects of our lives,\u201d added Sharot.<\/p>\n\n\n\n<p>The findings emphasize the pressing need to create AI systems that are as unbiased and accurate as possible to mitigate their influence on human biases and ensure more equitable AI applications.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence systems tend to adopt and amplify human biases, leading users to become more prejudiced, according to a new study by researchers at University College London (UCL). This discovery points to a significant feedback loop in which minor initial biases can increase human errors, potentially snowballing into larger problems. 
The study, published in Nature [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[],"class_list":["post-12535","post","type-post","status-publish","format-standard","hentry","category-ai"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"Artificial intelligence systems tend to adopt and amplify human biases, leading users to become more prejudiced, according to a new study by researchers at University College London (UCL). This discovery points to a significant feedback loop where minor initial biases can increase human errors, potentially snowballing into more significant issues. 
The study, published in Nature&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/12535","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=12535"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/12535\/revisions"}],"predecessor-version":[{"id":12625,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/12535\/revisions\/12625"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=12535"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=12535"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=12535"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}