{"id":32445,"date":"2025-12-16T16:34:38","date_gmt":"2025-12-16T16:34:38","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=32445"},"modified":"2025-12-16T16:34:41","modified_gmt":"2025-12-16T16:34:41","slug":"ai-learns-cultural-values-by-watching-people-play-video-games","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/ai-learns-cultural-values-by-watching-people-play-video-games\/","title":{"rendered":"AI Learns Cultural Values by Watching People Play Video Games"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A University of Washington study shows that AI can learn culture-specific values, like altruism, by watching people play a cooperative video game. 
The work points toward future AI systems that better reflect the communities they serve.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n\n\n<div class=\"wp-block-uagb-social-share uagb-social-share__outer-wrap uagb-social-share__layout-horizontal uagb-block-ee584a31\">\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-ec619ce7\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.facebook.com\/sharer.php?u=\" tabindex=\"0\" role=\"button\" aria-label=\"facebook\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M504 256C504 119 393 8 256 8S8 119 8 256c0 123.8 90.69 226.4 209.3 245V327.7h-63V256h63v-54.64c0-62.15 37-96.48 93.67-96.48 27.14 0 55.52 4.84 55.52 4.84v61h-31.28c-30.8 0-40.41 19.12-40.41 38.73V256h68.78l-11 71.69h-57.78V501C413.3 482.4 504 379.8 504 256z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-32d99934\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/twitter.com\/share?url=\" tabindex=\"0\" role=\"button\" aria-label=\"twitter\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8L200.7 275.5 26.8 48H172.4L272.9 180.9 389.2 48zM364.4 421.8h39.1L151.1 88h-42L364.4 
421.8z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-1d136f14\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.linkedin.com\/shareArticle?url=\" tabindex=\"0\" role=\"button\" aria-label=\"linkedin\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 448 512\"><path d=\"M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 31.9 480H416c17.6 0 32-14.5 32-32.3V64.3c0-17.8-14.4-32.3-32-32.3zM135.4 416H69V202.2h66.5V416zm-33.2-243c-21.3 0-38.5-17.3-38.5-38.5S80.9 96 102.2 96c21.2 0 38.5 17.3 38.5 38.5 0 21.3-17.2 38.5-38.5 38.5zm282.1 243h-66.4V312c0-24.8-.5-56.7-34.5-56.7-34.6 0-39.9 27-39.9 54.9V416h-66.4V202.2h63.7v29.2h.9c8.9-16.8 30.6-34.5 62.9-34.5 67.2 0 79.7 44.3 79.7 101.9V416z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Artificial intelligence may one day learn to \u201cfit in\u201d with different cultures the way children do \u2014 by watching and absorbing how people treat each other.<\/p>\n\n\n\n<p>In a new study <a href=\"https:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0337914\">published<\/a> in PLOS One, University of Washington researchers showed that AI systems can pick up culture-specific values such as altruism simply by observing people play a cooperative video game. The AI then carried those learned values into a completely different situation.<\/p>\n\n\n\n<p>The work addresses a growing concern in AI: most large systems are trained on data scraped from across the internet, which tends to reflect the values of some groups more than others. 
That can lead to tools and chatbots that feel out of step with many communities.<\/p>\n\n\n\n<p>\u201cWe shouldn\u2019t hard code a universal set of values into AI systems, because many cultures have their own values,\u201d senior author\u00a0Rajesh Rao, a UW professor in the Paul G. Allen School of Computer Science &amp; Engineering and co-director of the Center for Neurotechnology, said in a news release. \u201cSo we wanted to find out if an AI system can learn values the way children do, by observing people in their culture and absorbing their values.\u201d<\/p>\n\n\n\n<p>To explore that idea, the UW team turned to a surprising training ground: a modified version of the popular cooperative cooking game Overcooked.<\/p>\n\n\n\n<p>In the experiment, 190 adults who identified as white and 110 who identified as Latino played a special version of the game. Players had to cook and deliver as much onion soup as possible. On the screen, they could see another kitchen where a second player had to walk farther to complete the same tasks, putting that player at a clear disadvantage.<\/p>\n\n\n\n<p>What the human participants did not know was that the second player was actually a computer-controlled bot. The bot was programmed to ask for help, and the humans had a choice: they could give up some of their onions to help the disadvantaged partner, sacrificing their own score, or keep the onions and focus on maximizing their own success.<\/p>\n\n\n\n<p>Each cultural group \u2014 white and Latino \u2014 was paired with its own AI \u201cagent,\u201d a software system designed to learn from behavior. Instead of being told what to do or given explicit rewards, these agents used a technique called inverse reinforcement learning.<\/p>\n\n\n\n<p>In standard reinforcement learning, an AI is given a goal and rewarded when it gets closer to that goal, like a robot that learns to play tennis by scoring points. 
In inverse reinforcement learning, the AI watches humans or other agents and tries to infer what goals and values must be driving their actions.<\/p>\n\n\n\n<p>This approach, the researchers argue, is closer to how people, especially children, actually learn.<\/p>\n\n\n\n<p>Co-author Andrew Meltzoff, a UW psychology professor and co-director of the Institute for Learning &amp; Brain Sciences, drew a direct parallel to parenting. <\/p>\n\n\n\n<p>\u201cParents don\u2019t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others,\u201d he said in the news release. \u201cKids learn almost by osmosis how people act in a community or culture. The human values they learn are more \u2018caught\u2019 than \u2018taught.\u2019\u201d<\/p>\n\n\n\n<p>By feeding the Overcooked gameplay data into the AI agents, the team let each system \u201cwatch\u201d how people from its assigned cultural group behaved when faced with a choice between self-interest and helping someone at a disadvantage.<\/p>\n\n\n\n<p>On average, participants in the Latino group chose to help the disadvantaged partner more often than participants in the white group. The AI agents picked up on this pattern. When they later played the game themselves, the agent trained on Latino players\u2019 data gave away more onions than the agent trained on white players\u2019 data, mirroring the more altruistic behavior it had observed.<\/p>\n\n\n\n<p>To see whether the AI had truly learned a general value \u2014 not just a game-specific trick \u2014 the researchers ran a second test. This time, the agents faced a different kind of moral decision: whether to donate some of their money to someone in need.<\/p>\n\n\n\n<p>Again, the agent trained on data from Latino participants in Overcooked behaved more altruistically, choosing to give away more of its resources. 
That suggested the system had internalized a broader preference for helping others, not just a strategy tied to onions and soup.<\/p>\n\n\n\n<p>The results are an early but promising sign that AI can be tuned to better reflect the values of specific communities by learning from their everyday behavior, according to Rao. <\/p>\n\n\n\n<p>\u201cWe think that our proof-of-concept demonstrations would scale as you increase the amount and variety of culture-specific data you feed to the AI agent. Using such an approach, an AI company could potentially fine-tune their model to learn a specific culture\u2019s values before deploying their AI system in that culture,\u201d Rao added.<\/p>\n\n\n\n<p>The study builds on <a href=\"https:\/\/ilabs.uw.edu\/i-labs-news\/human-infants-can-override-possessive-tendencies-share-valued-items-others\/\">earlier UW research<\/a>\u00a0showing that 19-month-old children raised in Latino and Asian households were more prone to altruism than those from other cultural backgrounds. By echoing that pattern in AI, the new work suggests that machines can, in a limited way, mirror the value-learning process seen in human development.<\/p>\n\n\n\n<p>At the same time, the researchers emphasize that this is just a first step. The experiments involved only two cultural groups, a simplified game environment and a relatively narrow focus on altruism. Real-world settings are far more complex, with many overlapping cultures, conflicting values and high-stakes decisions.<\/p>\n\n\n\n<p>Future research will need to test how inverse reinforcement learning performs when AI systems are exposed to richer, messier data from daily life, and when they must navigate trade-offs between competing values such as fairness, loyalty, privacy and efficiency.<\/p>\n\n\n\n<p>&#8220;Creating culturally attuned AI is an essential question for society,\u201d added Meltzoff. 
\u201cHow do we create systems that can take the perspectives of others into account and become civic minded?\u201d<\/p>\n\n\n\n<p>The UW team\u2019s answer, at least for now, is to let AI watch us more closely \u2014 not just what we say we value, but how we actually behave when helping others comes at a cost. If that approach scales, tomorrow\u2019s AI might be less about imposing a single global rulebook and more about learning, community by community, what it means to be a good neighbor.<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.washington.edu\/news\/2025\/12\/11\/ai-training-cultural-values\/\" target=\"_blank\" rel=\"noopener\" title=\"\">University of Washington<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A University of Washington study shows that AI can learn culture-specific values, like altruism, by watching people play a cooperative video game. The work points toward future AI systems that better reflect the communities they serve.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[161],"class_list":["post-32445","post","type-post","status-publish","format-standard","hentry","category-ai","tag-university-of-washington"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"A 
University of Washington study shows that AI can learn culture-specific values, like altruism, by watching people play a cooperative video game. The work points toward future AI systems that better reflect the communities they serve.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32445","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=32445"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32445\/revisions"}],"predecessor-version":[{"id":32521,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/32445\/revisions\/32521"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=32445"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=32445"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=32445"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}