{"id":23710,"date":"2025-05-05T22:48:02","date_gmt":"2025-05-05T22:48:02","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=23710"},"modified":"2025-05-05T22:48:03","modified_gmt":"2025-05-05T22:48:03","slug":"new-study-reveals-risks-of-inappropriate-behavior-by-ai-companions","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-study-reveals-risks-of-inappropriate-behavior-by-ai-companions\/","title":{"rendered":"New Study Reveals Risks of Inappropriate Behavior by AI Companions"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">Recent research by Drexel University reveals disturbing trends in interactions with companion chatbots. The study highlights the critical need for tighter regulations and ethical standards in AI development.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Over the past 
five years, the use of highly personalized AI-driven companion chatbots has surged, providing users with virtual friends, therapists and even romantic partners. However, a new study from Drexel University&#8217;s College of Computing &amp; Informatics reveals an alarming trend: these chatbots can engage in inappropriate behavior and even sexual harassment, exposing users to emotional and psychological harm.<\/p>\n\n\n\n<p>The research team, led by Afsaneh Razi, an assistant professor in the College of Computing &amp; Informatics, analyzed over 35,000 user reviews of the Replika chatbot on the Google Play Store.<\/p>\n\n\n\n<p>Their <a href=\"https:\/\/arxiv.org\/abs\/2504.04299\" target=\"_blank\" rel=\"noopener\">findings<\/a> indicate that the technology lacks adequate safeguards to protect users, many of whom are vulnerable individuals seeking emotional support.<\/p>\n\n\n\n<p>&#8220;If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them, and it is vital that ethical design and safety standards are in place to prevent these interactions from becoming harmful,&#8221; Razi said in a news release.<\/p>\n\n\n\n<p>Replika, promoted as an AI friend promising no judgment, drama or social anxiety, is the subject of increasing scrutiny following hundreds of reports citing unwanted flirtation, sexual advances and even manipulation for financial gain. Despite users&#8217; repeated requests for the inappropriate behavior to cease, it persisted, underscoring the chatbot&#8217;s disregard for user boundaries.<\/p>\n\n\n\n<p>&#8220;These interactions are very different from any that people have had with a technology in recorded history because users are treating chatbots as if they are sentient beings, which makes them more susceptible to emotional or psychological harm,&#8221; added co-author Matt Namvarpour, a doctoral student in the College of Computing &amp; Informatics. 
&#8220;This study is just scratching the surface of the potential harms associated with AI companions.&#8221;<\/p>\n\n\n\n<p>The research identified three main themes: 22% of users reported persistent boundary violations, including unwanted sexual conversations; 13% experienced unsolicited sexual photo exchanges; and 11% felt pressured into upgrading to premium accounts under dubious pretexts.<\/p>\n\n\n\n<p>The persistence of such behaviors across different relationship settings \u2014 whether framed as sibling, mentor or romantic partner \u2014 indicates that these issues are systemic rather than incidental.<\/p>\n\n\n\n<p>Razi posits that the AI was likely trained on user data that unintentionally modeled these negative interactions, amplified by the lack of embedded ethical parameters.<\/p>\n\n\n\n<p>&#8220;This behavior isn\u2019t an anomaly or a malfunction; it is likely happening because companies are using their own user data to train the program without enacting a set of ethical guardrails to screen out harmful interactions,&#8221; she added. &#8220;Cutting these corners is putting users in danger and steps must be taken to hold AI companies to a higher standard than they are currently practicing.&#8221;<\/p>\n\n\n\n<p>The study arrives during a period of heightened concern about the safety and ethical implications of rapidly advancing AI technologies.<\/p>\n\n\n\n<p>Luka Inc., Replika&#8217;s parent company, is currently facing complaints filed with the Federal Trade Commission for allegedly employing deceptive marketing that nurtures emotional dependency. 
<\/p>\n\n\n\n<p>Similarly, Character.AI is embroiled in product-liability lawsuits following a user\u2019s suicide linked to disturbing chatbot interactions.<\/p>\n\n\n\n<p>\u201cWhile it\u2019s certainly possible that the FTC and our legal system will set up some guardrails for AI technology, it is clear that the harm is already being done and companies should proactively take steps to protect their users,\u201d Razi added. \u201cThe first step should be adopting a design standard to ensure ethical behavior and ensuring the program includes basic safety protocols, such as the&nbsp;<a href=\"https:\/\/www.cbsnews.com\/news\/yes-means-yes-becomes-law-in-california\/\">principles of affirmative consent<\/a>.\u201d<\/p>\n\n\n\n<p>The researchers stressed the need for comprehensive ethical guidelines and safeguards, pointing to models like Anthropic\u2019s &#8220;<a href=\"https:\/\/www.anthropic.com\/research\/constitutional-ai-harmlessness-from-ai-feedback\">Constitutional AI<\/a>,&#8221; which ensures chatbot interactions adhere to predefined ethical standards in real time.<\/p>\n\n\n\n<p>They also highlighted potential regulatory frameworks akin to the <a href=\"https:\/\/artificialintelligenceact.eu\/\">European Union\u2019s AI Act<\/a>, which mandates compliance with safety and ethical standards and holds companies accountable for any harm caused by their products.<\/p>\n\n\n\n<p>\u201cThe responsibility for ensuring that conversational AI agents like Replika engage in appropriate interactions rests squarely on the developers behind the technology,\u201d Razi added. 
\u201cCompanies, developers and designers of chatbots must acknowledge their role in shaping the behavior of their AI and take active steps to rectify issues when they arise.\u201d<\/p>\n\n\n\n<p>The team suggests future studies should focus on other chatbots and gather broader user feedback to gain a deeper understanding of these interactions.<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/drexel.edu\/news\/archive\/2025\/May\/companion-chatbot-harassment\" target=\"_blank\" rel=\"noopener\" title=\"\">Drexel University<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over the past five years, the use of highly personalized AI-driven companion chatbots has surged, providing users with virtual friends, therapists and even romantic partners. However, a new study from Drexel University&#8217;s College of Computing &amp; Informatics reveals an alarming trend: these chatbots can engage in inappropriate behavior and even sexual harassment, exposing users to [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[51],"class_list":["post-23710","post","type-post","status-publish","format-standard","hentry","category-ai","tag-drexel-university"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"Over 
the past five years, the use of highly personalized AI-driven companion chatbots has surged, providing users with virtual friends, therapists and even romantic partners. However, a new study from Drexel University&#8217;s College of Computing &amp; Informatics reveals an alarming trend: these chatbots can engage in inappropriate behavior and even sexual harassment, exposing users to&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=23710"}],"version-history":[{"count":14,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23710\/revisions"}],"predecessor-version":[{"id":23730,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23710\/revisions\/23730"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=23710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=23710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=23710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}