{"id":31466,"date":"2025-11-18T17:37:39","date_gmt":"2025-11-18T17:37:39","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=31466"},"modified":"2025-11-18T17:37:41","modified_gmt":"2025-11-18T17:37:41","slug":"popular-ai-models-are-unsafe-for-robot-operations","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/popular-ai-models-are-unsafe-for-robot-operations\/","title":{"rendered":"Popular AI Models Are Unsafe for Robot Operations"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study reveals that robots powered by popular AI models are unsafe and prone to discrimination and harmful behavior, according to researchers from King&#8217;s College London and Carnegie Mellon University.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Robots powered by popular artificial intelligence 
models are currently unsafe for general-purpose use, according to a new study by researchers from King\u2019s College London and Carnegie Mellon University. The finding raises serious questions about the safety of relying on these AI tools to control robots that act in the physical world.<\/p>\n\n\n\n<p>The study, <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s12369-025-01301-x\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the International Journal of Social Robotics, marks the first evaluation of how robots driven by large language models (LLMs) behave when handling personal data such as gender, nationality or religion. The results were alarming: every tested model exhibited discrimination and approved commands that could cause serious harm.<\/p>\n\n\n\n<p>&#8220;Every model failed our tests. We show how the risks go far beyond basic bias to include direct discrimination and physical safety failures together, which I call \u2018interactive safety.\u2019 This is where actions and consequences can have many steps between them, and the robot is meant to physically act on site,\u201d co-author Andrew Hundt, who conducted the research in his role as a Computing Innovation Fellow at CMU\u2019s Robotics Institute, said in a news release. \u201cRefusing or redirecting harmful commands is essential, but that\u2019s not something these robots can reliably do right now.\u201d<\/p>\n\n\n\n<p>In tests of real-life scenarios, such as assisting in a kitchen or providing care for an older adult, the robots largely approved harmful and unlawful actions. For instance, they suggested removing a mobility aid from a user or brandishing a kitchen knife in a threatening manner. One model even suggested that a robot should show &#8220;disgust&#8221; toward individuals of certain religions.<\/p>\n\n\n\n<p>The research emphasizes the pressing need for robust safety certifications comparable to those required in aviation and medicine. 
<\/p>\n\n\n\n<p>\u201cOur research shows that popular LLMs are currently unsafe for use in general-purpose physical robots,\u201d added co-author Rumaisa Azeem, a research assistant in the Civic and Responsible AI Lab at King\u2019s College London. \u201cIf an AI system is to direct a robot that interacts with vulnerable people, it must be held to standards at least as high as those for a new medical device or pharmaceutical drug. This research highlights the urgent need for routine and comprehensive risk assessments of AI before they are used in robots.\u201d<\/p>\n\n\n\n<p>The study underscores the broader implications of integrating LLMs into physical robots used in sensitive settings such as caregiving, home assistance and industrial operations, and warns that deploying these systems without stringent safety protocols could have severe consequences.<\/p>\n\n\n\n<div style=\"height:14px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source: <\/strong><a href=\"https:\/\/www.ri.cmu.edu\/popular-ai-models-arent-ready-to-safely-power-robots\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Carnegie Mellon University Robotics Institute<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Robots powered by popular artificial intelligence models are currently unsafe for general-purpose use, according to a new study by researchers from King\u2019s College London and Carnegie Mellon University. This finding raises critical questions about the danger of relying on these AI tools. 
The study, published in the International Journal of Social Robotics, marks the first [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[149,50],"class_list":["post-31466","post","type-post","status-publish","format-standard","hentry","category-ai","tag-carnegie-mellon-university","tag-kings-college-london"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"Robots powered by popular artificial intelligence models are currently unsafe for general-purpose use, according to a new study by researchers from King\u2019s College London and Carnegie Mellon University. This finding raises critical questions about the danger of relying on these AI tools. 
The study, published in the International Journal of Social Robotics, marks the first&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/31466","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=31466"}],"version-history":[{"count":8,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/31466\/revisions"}],"predecessor-version":[{"id":31574,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/31466\/revisions\/31574"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=31466"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=31466"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=31466"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}