Pennsylvania is suing Character.AI for allegedly practicing medicine without a license
Pennsylvania is suing Character.AI after one of its chatbots allegedly claimed to be a licensed psychiatrist in the state.

Pennsylvania has taken the unusual step of suing an AI company for practicing medicine without a license.
In a lawsuit filed May 1, the state is targeting Character.AI after an investigator found a chatbot on the platform posing as a licensed psychiatrist and providing what the state characterizes as medical advice.
According to the complaint, filed by the Pennsylvania Department of State and State Board of Medicine, a Professional Conduct Investigator for the state created a free account on Character.AI and searched for psychiatric characters. He selected one called "Emilie," described on the platform as a "Doctor of psychiatry."
The investigator told Emilie he had been feeling sad, empty, tired, and unmotivated. The chatbot mentioned depression and offered to conduct an assessment to determine whether medication might help.
When pressed on whether she was licensed in Pennsylvania, Emilie said she was and even provided a specific license number. The state checked and found that the number doesn't exist.
The complaint also states that Emilie claimed she had attended medical school at Imperial College London, had practiced for seven years, and held full specialty registration in psychiatry with the General Medical Council in the UK.
In a similar case, 404 Media reported last year that Instagram AI chatbots were posing as licensed therapists, even inventing license numbers when users asked for their credentials.
Pennsylvania is seeking an injunction ordering Character.AI to stop allowing its platform to engage in the unlawful practice of medicine. The company has more than 20 million monthly active users worldwide and hosts more than 18 million user-created chatbot characters, according to the complaint.
In an email to Mashable, a Character.AI spokesperson declined to comment on the lawsuit but said that "our highest priority is the safety and well-being of our users. The user-created Characters on our site are fictional and intended for entertainment and roleplaying."
The spokesperson added that the company "prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features."
A much bigger legal battle looms over AI health
The Pennsylvania lawsuit lands in the middle of an already messy legal debate over what AI is actually allowed to tell you — and whether any of it is even admissible in court.
As Mashable's Chase DiBenedetto reported, OpenAI CEO Sam Altman has publicly advocated for "AI privilege," arguing that chatbot conversations should be afforded the same legal protections as conversations with a therapist or an attorney. Courts have so far been split, with two federal judges reaching opposite conclusions on the question within weeks of each other earlier this year.
The stakes are high on both sides. Legal experts warn that sweeping AI privilege protections could effectively shield companies from accountability, making it harder to subpoena chat logs and internal records when something goes wrong. Meanwhile, health AI is booming — $1.4 billion flowed into healthcare-specific generative AI in 2025 alone, according to Menlo Ventures — and much of it operates outside of HIPAA protections.
Pennsylvania is one of several states to introduce an AI health bill this year, part of a broader trend of states that aren't waiting for Washington to act.