OpenAI on Wednesday announced ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation and medical research.
The company will make the tool available for free to any certified physician, nurse, physician assistant, or pharmacist, starting in the U.S., with plans to expand access to more countries and clinician groups over time.
According to the company, clinician usage of ChatGPT has more than doubled in the past year. That tracks with broader trends: an American Medical Association survey found that physician use of artificial intelligence has more than doubled since 2023, with 81% of respondents now using the technology in professional settings.
This new AI tool follows OpenAI’s January launch of ChatGPT for Healthcare, a dedicated healthcare product, and ChatGPT Health, a consumer product that helps individuals understand their health information and get personalized answers to their medical questions.
ChatGPT for Healthcare is a workspace for researchers, clinicians, and administrators powered by the physician-tested GPT-5 model. Early adopters include Boston Children’s Hospital, Cedars-Sinai Medical Center, Stanford Medicine Children’s Health, AdventHealth, HCA Healthcare, Baylor Scott & White Health, Memorial Sloan Kettering Cancer Center, and the University of California, San Francisco, OpenAI said in January.
OpenAI executives said in a blog post that free access for clinicians is the next step, building on the company’s ongoing work in continuous model evaluation for healthcare.
OpenAI is moving deeper into healthcare as AI startups in the space extend their capabilities across clinical workflows and position themselves as AI copilots for clinicians. Abridge, which started as an AI scribe, is expanding into clinical decision support, providing real-time clinical evidence and guidance to physicians. Meanwhile, OpenEvidence, an AI-powered medical search engine, has built a clinical AI assistant and a medical coding tool that transcribes patient visits.
OpenAI also introduced HealthBench Professional, an open benchmark for evaluating large language models on clinical tasks. The benchmark is structured around three common use cases: care consultation, writing and documentation, and medical research. It builds on HealthBench, the open-source benchmark OpenAI released last year to measure the performance and safety of large language models in healthcare.
According to OpenAI, the new benchmarking tool “measures the performance and safety of typical clinician chats using physician-created conversations and rubrics, multi-level physician judgment, and careful data filtering.”
As a baseline, the company asked physicians to create their own answers to tasks in their specialty, with unlimited time and web access. In its HealthBench Professional evaluation, OpenAI found that GPT‑5.4 in the ChatGPT for Clinicians workspace outperformed base GPT‑5.4, all other OpenAI and external models, and the human physicians, the company said.
OpenAI said it worked with “hundreds of physician advisors” to deliver and improve ChatGPT’s functionality for clinicians.
ChatGPT for Clinicians provides free access to OpenAI’s current frontier models for healthcare use cases, supporting clinicians’ questions, research, and documentation. It also lets clinicians turn common workflows into reusable skills, so the same steps run consistently for tasks like referrals, prior authorization, and patient instructions, the company said in a blog post.
The tool also includes clinical search, drawing on evidence from “millions of trusted, peer-reviewed medical sources,” OpenAI executives said.
The company says that when a clinician investigates a clinical question with ChatGPT, the review of eligible evidence automatically counts towards continuing medical education credits, without the need for a separate course or additional documentation.
Eligible accounts may also set up a Business Associate Agreement (BAA) with OpenAI to support HIPAA-compliant use when access to protected health information (PHI) is required, though OpenAI executives noted that many clinical workflows do not involve PHI.
OpenAI also claims that no conversations are used to train the model and that the tool offers features such as multi-factor authentication to protect the security of sensitive information.
The company claims that its AI models are continuously evaluated for health performance and safety. The company says its physician advisors reviewed more than 700,000 model responses that reflect how clinicians and patients use ChatGPT in the real world.
Before releasing ChatGPT for Clinicians, physician advisors tested 6,924 conversations drawn from their daily work across clinical care, documentation, and research; they rated 99.6% of responses as safe and accurate, according to OpenAI.
“For a subset of 355 examples in which three independent physicians each specified ground truth citations, ChatGPT for Clinicians cited those sources more frequently than human physicians. Still, ChatGPT for Clinicians is designed to support clinicians with information, rather than replace their judgment or expertise,” OpenAI executives said in a blog post.
The company also points to third-party evaluations, such as Stanford University’s MedHELM and MedMarks, that have ranked OpenAI’s models above other LLMs in real-world medical applications.