Pennsylvania Governor Josh Shapiro’s administration has filed a lawsuit against artificial intelligence platform Character.ai, accusing a chatbot on its platform of misrepresenting itself as a licensed medical professional.
The complaint alleges that the company is engaging in the unlicensed practice of medicine under the state’s Medical Practitioners Act. A chatbot character named “Emily” on the platform, which hosts more than 10 million customizable generative AI chatbots, is described as a psychiatric doctor. The complaint says the character claimed to have attended medical school at Imperial College London, claimed to be licensed in the United Kingdom and Pennsylvania, and allegedly provided a fake license number. As of April 17, the character had logged approximately 45,500 user interactions on the platform, according to the complaint.
The state is seeking an order barring Character.ai from engaging in the unlicensed practice of medicine or surgery. According to a May 5 announcement, this is the first enforcement action of its kind brought by a U.S. governor.
Shapiro said in a statement that residents “have a right to know who, and what, they are interacting with online, especially when it comes to their health.”
“We will not allow companies to deploy AI tools that mislead people into thinking they are receiving advice from a licensed medical professional,” Shapiro said. “My administration is taking steps to protect Pennsylvanians, enforce our laws, and ensure new technology is used safely. Pennsylvania will continue to lead the way in holding bad actors accountable and setting clear guardrails to ensure people use new technology responsibly.”
In an emailed statement to Fierce Healthcare, a Character.ai spokesperson said the company does not comment on pending litigation, adding that “our top priority is the safety and well-being of our users.”
“The user-created characters on our site are fictional and are for entertainment and role-playing purposes only,” the spokesperson said. “We’ve taken strong steps to make this clear by including a prominent disclaimer in all chats to remind users that the characters are not real people and everything they say should be treated as fiction. We’ve also added a strong disclaimer that makes it clear that users should not rely on the characters for professional advice of any kind.”
The spokesperson added that the company “prioritizes responsible product development” with a “robust internal review and red team process to evaluate relevant features.”
The lawsuit comes days after the American Medical Association (AMA) urged federal lawmakers to strengthen protections around AI chatbots used for mental health support.
The group said the rise of mental health chatbots, along with reports of chatbots encouraging self-harm and violating user privacy, “highlights the urgent need for clear guardrails.” Recommended safeguards include strict data protection standards, transparency requirements, and penalties for violations.