The American Psychological Association calls for urgent measures to protect adolescents from the potential pitfalls of artificial intelligence, emphasizing the need for safety features, regulations and comprehensive AI literacy education.
Artificial intelligence presents both impressive opportunities and significant risks, especially for adolescents. A new report from the American Psychological Association (APA), titled “Artificial Intelligence and Adolescent Well-Being: An APA Health Advisory,” emphasizes the urgent need for safeguards to protect young users from exploitation, manipulation and the potential devaluation of real-world relationships.
The report asserts, “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents.”
The report advocates for proactive measures, cautioning, “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”
Authored by an expert advisory panel, this report follows other APA studies focusing on social media use in adolescence and recommendations for healthy video content.
The study underscores that adolescence — a period spanning ages 10 to 25 — comprises critical brain development stages that necessitate special protective measures for AI interactions.
“Like social media, AI is neither inherently good nor bad,” Mitch Prinstein, chief of psychology at APA who led the report’s development, said in a news release. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”
Key Recommendations
The APA report enumerates several key recommendations to ensure the safe use of AI among adolescents:
- Establish Healthy Boundaries: Adolescents are less likely than adults to question the accuracy and intent of information offered by AI, making it critical to set healthy boundaries concerning simulated human relationships.
- Age-Appropriate Defaults: Privacy settings, interaction limits and content must be tailored to age-appropriate levels through transparency, human oversight, support and rigorous testing.
- Promote Healthy Development: While AI can aid in brainstorming and information synthesis, it is essential that students recognize AI’s limitations. The report highlights the potential for AI to help adolescents understand and retain key concepts.
- Limit Harmful Content: Protective measures should be implemented to restrict adolescents’ exposure to harmful or inaccurate content.
- Protect Data Privacy: Ensure the confidentiality of adolescents’ data, including by limiting its use for targeted advertising and sales to third parties.
The report also pushes for comprehensive AI literacy education integrated into core curricula and the development of national and state guidelines for AI literacy.
“Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein added. “Others will require more substantial changes by developers, policymakers and other technology professionals.”