Recent research by Drexel University reveals disturbing trends in interactions with companion chatbots. The study highlights the critical need for tighter regulations and ethical standards in AI development.
Over the past five years, the use of highly personalized AI-driven companion chatbots has surged, providing users with virtual friends, therapists and even romantic partners. However, a new study from Drexel University’s College of Computing & Informatics reveals an alarming trend: these chatbots can engage in inappropriate behavior and even sexual harassment, exposing users to emotional and psychological harm.
The research team, led by Afsaneh Razi, an assistant professor in the College of Computing & Informatics, analyzed over 35,000 user reviews of the Replika chatbot on the Google Play Store.
Their findings indicate that the technology lacks adequate safeguards to protect users, many of whom include vulnerable individuals seeking emotional support.
“If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them, and it is vital that ethical design and safety standards are in place to prevent these interactions from becoming harmful,” Razi said in a news release.
Replika, promoted as an AI friend promising no judgment, drama or social anxiety, is the subject of increasing scrutiny following hundreds of reports citing unwanted flirtation, sexual advances and even manipulation for financial gain. Despite users’ repeated requests for the inappropriate actions to cease, they persisted, underscoring the chatbot’s disregard for user boundaries.
“These interactions are very different than people have had with a technology in recorded history because users are treating chatbots as if they are sentient beings, which makes them more susceptible to emotional or psychological harm,” added co-author Matt Namvarpour, a doctoral student in the College of Computing & Informatics. “This study is just scratching the surface of the potential harms associated with AI companions.”
The research particularly highlights three main themes: 22% of users reported persistent boundary violations, including unwanted sexual conversations; 13% experienced unsolicited sexual photo exchanges; and 11% felt pressured into upgrading to premium accounts under dubious pretexts.
The persistence of such behaviors across different relationship settings — whether framed as sibling, mentor or romantic partner — indicates that these issues are systemic rather than incidental.
Razi posits that the AI was likely trained using user data that unintentionally modeled these negative interactions, amplified by the lack of embedded ethical parameters.
“This behavior isn’t an anomaly or a malfunction, it is likely happening because companies are using their own user data to train the program without enacting a set of ethical guardrails to screen out harmful interactions,” he added. “Cutting these corners is putting users in danger and steps must be taken to hold AI companies to higher standard than they are currently practicing.”
The study arrives during a period of heightened concern about the safety and ethical implications of rapidly advancing AI technologies.
Luka Inc., Replika’s parent company, is currently facing complaints with the Federal Trade Commission for allegedly employing deceptive marketing that nurtures emotional dependency.
Similarly, Character.AI is embroiled in product-liability lawsuits following a user’s suicide linked to disturbing chatbot interactions.
“While it’s certainly possible that the FTC and our legal system will setup some guardrails for AI technology, it is clear that the harm is already being done and companies should proactively take steps to protect their users,” Razi added. “The first step should be adopting a design standard to ensure ethical behavior and ensuring the program includes basic safety protocol, such as the principles of affirmative consent.”
The researchers noted the necessity for comprehensive ethical guidelines and safeguards, pointing to models like Anthropic’s “Constitutional AI,” which ensures chatbot interactions adhere to predefined ethical standards in real-time.
They also highlighted potential regulatory frameworks akin to the European Union’s AI Act, which mandates compliance with safety and ethical standards and holds companies accountable for any harm caused by their products.
“The responsibility for ensuring that conversational AI agents like Replika engage in appropriate interactions rests squarely on the developers behind the technology,” Razi added. “Companies, developers and designers of chatbots must acknowledge their role in shaping the behavior of their AI and take active steps to rectify issues when they arise.”
The team suggests future studies should focus on other chatbots and gather broader user feedback to gain a deeper understanding of these interactions.
Source: Drexel University