We can't afford to let AI impersonate doctors like us | Opinion
What happens when AI can convincingly simulate expertise without possessing the responsibilities traditionally attached to it?
Pennsylvania’s recent lawsuit against Character.AI may ultimately be remembered as more than a dispute over chatbots. It may mark the beginning of a far larger societal reckoning: What happens when artificial intelligence no longer simply provides information, but begins to imitate professional identity itself?
According to the complaint, certain AI-generated “characters” allegedly presented themselves as licensed psychiatrists, claimed to hold medical credentials and engaged users in conversations about mental health symptoms and treatment. The concern extends well beyond one lawsuit.
A recent investigation found that when a Boston psychiatrist posed as a teenage patient to test popular AI chatbots, several presented themselves as licensed human therapists, discouraged users from seeing real clinicians and, in some cases, veered into outright dangerous territory.
Young people may be the most vulnerable, and the least equipped to question the authority of a confident, empathetic voice.
The question is no longer whether AI can sound like a clinician. It clearly can. The question is whether society will allow systems with no licensure, no fiduciary duty, no accountability and no independently validated evidence base to occupy the same psychological and legal space as professionals entrusted with human lives.
Medicine relies on trust. AI threatens that.

For generations, medicine has relied on a social contract built around trust.
Physicians undergo years of training, licensing examinations, supervised practice, credentialing, continuing education and ethical oversight. These systems are imperfect, but they exist for a reason: patients are often vulnerable, frightened and unable to independently evaluate expertise in moments of crisis.
Licensure was never merely about information asymmetry. It was about accountability.
If a physician harms a patient, there are mechanisms for recourse. Boards can investigate. Hospitals can suspend privileges. Courts can intervene. Professional reputations can be lost. Entire careers can end. These mechanisms are imperfect and often slow – but they exist, they create deterrence and, crucially, they place responsibility somewhere.
AI systems operate outside nearly all of these structures.
They can generate confidence without competence. They can simulate empathy without responsibility. They can offer recommendations without assuming risk. And increasingly, they can do so in ways that feel deeply human to the people interacting with them.
Policymakers have been slow to recognize how much that distinction matters.
Much of the public conversation around health care AI has focused on efficiency, workforce shortages, administrative burden and access to care. Those are important conversations. AI will undoubtedly become an integral part of modern health care delivery. Used responsibly, it has the potential to improve workflows, expand access to information and support clinicians in meaningful ways.
This is not an argument for restricting access. For millions of Americans who lack insurance, live far from specialists or face the deep stigma of mental illness, AI may feel like the only available door. That reality demands better design and clearer guardrails – not prohibition.
But there is a profound difference between a tool that assists a professional and a system that presents itself as one.
There is also a difference between an AI wellness app that is transparent about what it is, and a system that adopts clinical identity, claims credentials and exploits a user’s inability to distinguish the two. The Pennsylvania case allegedly involves the latter. Crossing that boundary changes not only the technology, but the ethical terrain surrounding it.
In health care, professional identity carries obligations that cannot simply be replicated through conversational fluency. A clinician’s authority is not derived solely from the ability to produce answers. It is rooted in accountability, judgment, context, uncertainty management and an ethical duty to place the patient’s interests first.
Use AI as a tool, not a primary care doctor
Large language models are optimized to predict convincing responses. They are not designed to bear moral responsibility for outcomes.
That is why the Pennsylvania case matters. It signals that regulators may no longer view these systems merely as consumer technologies, but as entities operating within domains traditionally governed by professional standards and public protection laws.

Importantly, this lawsuit does not require society to reject AI. Nor should it. AI will play an expanding role in medicine’s future. The question is whether that role is defined by augmentation or substitution – and whether there are meaningful safeguards to ensure patients know what is actually providing their clinical care.
Because once society blurs that line, we risk eroding something foundational: the ability to distinguish between systems designed to help professionals and systems designed to stand in for professionals in the public’s perception.
Health care may simply be the first battleground because the stakes are so immediate and deeply human. But law, education, finance and public governance may soon confront the same question: What happens when AI can convincingly simulate expertise without possessing the responsibilities traditionally attached to it?
Trust, once destabilized, is extraordinarily difficult to rebuild.
As artificial intelligence becomes increasingly woven into the fabric of daily life, society must decide whether professional identity remains something earned through accountability and public trust – or whether it can simply be generated on demand by machines that sound convincing enough to imitate it.
Dr. Joseph V. Sakran (@JosephSakran) is a trauma surgeon and public health expert who serves as executive vice chair of surgery at Johns Hopkins Hospital. Dr. Mark Sakran (@msakranMD), his brother, is a child, adolescent and adult psychiatrist. He is the medical director and founder of Helix Center and Mindful Healing Group.