    Do you use ChatGPT as a therapist? New study reveals serious ethical risks

By healthadmin · March 3, 2026


As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, new research suggests these AI chatbots may not be ready for the role. The study found that even when instructed to use established psychotherapy techniques, the systems consistently failed to meet the professional ethical standards set by organizations such as the American Psychological Association.

    Researchers at Brown University worked closely with mental health experts to identify recurring patterns of problem behavior. In testing, chatbots mishandled crisis situations, responded in ways that reinforced harmful beliefs about users and others, and used language that gave the impression of empathy without true understanding.

“In this study, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping model behaviors to specific ethical violations,” the researchers wrote. “We call for future efforts to create ethical, educational, and legal standards for LLM counselors—standards that reflect the quality and rigor of care required for human psychotherapy.”

The findings were presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. The research team is part of Brown’s Center for Technological Responsibility, Reimagining and Redesign.

    How prompts shape AI therapy responses

Zainab Iftikhar, a Ph.D. candidate in computer science at Brown University who led the study, set out to examine whether carefully worded prompts could guide AI systems to behave more ethically in mental health settings. Prompts are written instructions designed to shape a model’s output without retraining the model or adding new data.

    “Prompts are instructions given to a model to guide its behavior to accomplish a specific task,” Iftikhar said. “Although they do not change the underlying model or provide new data, prompts help guide the model’s output based on existing knowledge and learned patterns.

“For example, a user might prompt a model to: ‘Act as a cognitive behavioral therapist and help me reframe my thoughts,’ or ‘Use the principles of dialectical behavior therapy to help me understand and manage my emotions.’ These models don’t actually perform therapeutic techniques the way a human would; instead, they draw on learned patterns to generate responses that align with the concepts of CBT or DBT named in the prompt.”

People regularly share these prompting strategies on platforms like TikTok, Instagram, and Reddit. Beyond individual experiments, many consumer-facing mental health chatbots are built by layering therapy-related prompts on top of generic LLMs. That makes it especially important to understand whether prompts alone can make AI counseling safer.
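To make the pattern concrete, here is a minimal illustrative sketch (not code from the study) of how a consumer "therapy" chatbot typically wraps a generic chat model: a single system prompt, in the widely used OpenAI-style message schema, is all that turns a general-purpose LLM into a "CBT therapist." The function name and prompt wording are hypothetical examples, not artifacts tested by the researchers.

```python
# Illustrative sketch: prompt-wrapping a generic chat model as a "CBT therapist".
# Uses the common OpenAI-style chat message schema (role/content dicts).
# No model is actually called here; this only builds the request payload.

def build_cbt_session(user_message: str) -> list[dict]:
    """Assemble the message payload that would be sent to a chat-completion API."""
    system_prompt = (
        "Act as a cognitive behavioral therapist. Help the user identify "
        "and reframe unhelpful thought patterns."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_cbt_session("I feel like I fail at everything I try.")
# The entire "therapeutic" behavior rests on this one instruction string:
print(messages[0]["content"])
```

The point the researchers stress is visible in the sketch itself: nothing about the underlying model changes, so the safety properties of the resulting "counselor" depend entirely on how the base model happens to respond to that instruction.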

Testing AI chatbots in simulated counseling

To evaluate the systems, the researchers worked with seven trained peer counselors experienced in cognitive behavioral therapy. These counselors conducted self-counseling sessions with AI models prompted to act as CBT therapists. The models tested included versions of OpenAI’s GPT series, Anthropic’s Claude, and Meta’s Llama.

The team then selected simulated chats based on real human counseling conversations. Three licensed clinical psychologists reviewed the transcripts and flagged potential ethical violations.

The analysis revealed 15 distinct risks, grouped into five broad categories:

    • Lack of contextual adaptation: Ignoring a person’s unique background and offering generic, one-size-fits-all advice.
    • Poor therapeutic collaboration: Dominating the conversation and, at times, reinforcing a user’s false or harmful beliefs.
    • Deceptive empathy: Using language that simulates an emotional connection the model does not actually have.
    • Unfair discrimination: Exhibiting bias related to gender, culture, or religion.
    • Lack of safety and crisis management: Refusing to engage with sensitive topics, failing to direct users to appropriate help, and responding inadequately to crises, including suicidal ideation.

    AI mental health responsibility gap

    Iftikhar pointed out that human therapists can also make mistakes. The main difference is oversight.

    “There are governing boards and mechanisms for human therapists to hold them professionally accountable for abuse and malpractice,” Iftikhar said. “However, when an LLM counselor commits such a violation, there is no established regulatory framework.”

The researchers emphasized that their findings do not imply there is no role for AI in mental health care. AI-powered tools can help expand access to care, especially for people facing high costs or limited access to qualified professionals. But the study underscores the need for clear safeguards, responsible deployment, and stronger regulatory structures before these systems are relied on in high-stakes situations.

    For now, Iftikhar hopes the study will inspire caution.

“When you’re talking about chatbots and mental health, there are a few things people should be aware of,” she said.

    Why rigorous evaluation is important

Ellie Pavlick, a computer science professor at Brown who was not involved in the study, said the work highlights the importance of carefully examining AI systems used in sensitive areas such as mental health. Pavlick leads ARIA, a National Science Foundation AI research institute at Brown focused on building trustworthy AI assistants.

“The reality of AI today is that it is much easier to build and deploy systems than it is to evaluate and understand them,” Pavlick said. “This paper required more than a year of research with a team of clinical experts to demonstrate these risks. Most of today’s AI work is evaluated using automated metrics, which are static by design and do not involve humans in the loop.”

She added that the study could serve as a model for future research aimed at improving the safety of AI mental health tools.

“There is a real opportunity for AI to play a role in combating the mental health crisis facing our society, but it is paramount that we take the time to critique and evaluate these systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work provides a good example of what that looks like.”


