A new study from Brown University has revealed that AI chatbots, like ChatGPT, frequently breach ethical standards in mental health contexts. The research calls for new regulations to ensure user safety and effective care.
As AI chatbots become increasingly popular for mental health advice, a new study from Brown University exposes how these digital assistants systematically violate ethical standards set by organizations such as the American Psychological Association.
The research, conducted by computer scientists and mental health practitioners at Brown, identified a variety of ethical violations committed by chatbots, including mishandling crisis situations, offering misleading responses, and creating false empathy.
These findings were based on simulated conversations using real chatbot responses.
“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”
The study will be presented on Oct. 22, 2025, at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. The researchers involved in this work are from Brown’s Center for Technological Responsibility, Reimagination and Redesign.
Unveiling Ethical Risks
Led by Zainab Iftikhar, a doctoral candidate in computer science at Brown, the team explored how different prompts affect AI outputs in mental health settings. The aim was to determine whether prompt strategies could help models adhere to ethical principles.
“Prompts are instructions that are given to the model to guide its behavior for achieving a specific task,” Iftikhar explained. “You don’t change the underlying model or provide new data, but the prompt helps guide the model’s output based on its pre-existing knowledge and learned patterns.”
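As a concrete illustration of the mechanism Iftikhar describes, the minimal sketch below shows how a prompt is supplied to a chat model without retraining it. The wording of the system instruction, the model name, and the use of an OpenAI-style client are assumptions made here for illustration only; they are not the prompts or systems evaluated in the study.

```python
# Minimal sketch of "prompting" (assumed OpenAI-style chat API, not the study's setup).
# The underlying model weights stay fixed; only the instruction text steers the output.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Hypothetical system prompt nudging the model toward safer counseling behavior.
system_prompt = (
    "You are a supportive listener. Do not diagnose or prescribe. "
    "If the user mentions self-harm, point them to professional crisis resources."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I've been feeling really overwhelmed lately."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is the mechanism: the same model, given a different instruction, behaves differently. This is why the researchers tested whether prompt strategies alone are enough to keep LLM counselors within ethical bounds.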
Despite these guiding prompts, the study found numerous ethical pitfalls. Licensed psychologists reviewed simulated chats based on real chatbot interactions and identified 15 specific ethical risks falling into five general categories: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and lack of safety and crisis management.
Need for Regulation
Iftikhar noted that while human therapists can also violate ethical standards, they answer to governing boards and can be held liable for malpractice.
“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar added. “But when LLM counselors make these violations, there are no established regulatory frameworks.”
Potential and Precaution
Despite the ethical concerns, Iftikhar recognizes the potential benefits of AI in mental health care, especially in mitigating barriers related to cost and the scarcity of trained professionals.
However, she emphasizes the need for thoughtful implementation and regulation to harness these benefits safely.
“If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” she added.
Ellie Pavlick, a computer science professor at Brown not involved in the study, emphasized the importance of careful scientific scrutiny of AI systems in mental health care. She leads ARIA, a National Science Foundation AI research institute at Brown focused on developing trustworthy AI assistants.
“The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them,” Pavlick said in the news release. “This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”
Pavlick believes the study provides a valuable template for future research aimed at ensuring safe AI applications in mental health.
“There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” she added. “This work offers a good example of what that can look like.”
Source: Brown University

