JAMA consolidates AI-related articles across its journal family here. There's also a twice-monthly podcast, open access on podcast apps (Apple Podcasts here).
For January 29, 2026, the topic is: Chatting With a Chatbot: The History of the First Clinical Chatbots, Straight From an LLM.
https://edhub.ama-assn.org/jn-learning/audio-player/19034472
The 12-minute interview with ChatGPT 4o traces AI in medicine back to the 1960s. There's a transcript.
Fun fact: When I was in med school in the 1980s (Stanford), I took a one-hour-a-week elective on AI in medicine taught by Ted Shortliffe.
Here's a recent JAMA piece on over- and under-regulation of AI, link. And "The death of the consult note," link. JAMA+ AI is edited by MGH psychiatrist Roy Perlis, who is heard on the interview above.
###
ChatGPT 5.2 summarizes the transcript.
###
JAMA+ AI Conversations: Chatting With a Chatbot — The Nearly Forgotten Origins of Clinical Chatbots
In a recent 12-minute episode of JAMA+ AI Conversations, Roy Perlis interviews not a historian, but a large language model, to revisit the early—and often oversimplified—history of clinical chatbots. The takeaway is that today’s debates about AI in mental health are not new. They trace directly back to the 1960s, when two pioneers, working in parallel, reached sharply different conclusions about whether computers should ever function as therapists.
ELIZA: The Famous Beginning
Most histories begin with ELIZA, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum. ELIZA used simple pattern matching to simulate a Rogerian psychotherapist. It reflected users’ statements back as questions, creating the illusion of empathy. Despite its technical simplicity, users often experienced it as surprisingly human.
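The reflection trick is easy to illustrate. The sketch below is not Weizenbaum's original code (ELIZA was written in MAD-SLIP, not Python); it is a minimal modern illustration of the technique the paragraph describes: match a keyword pattern, swap pronouns, and echo the statement back as a question. The specific patterns and word list are invented for the example.

```python
import re

# Minimal ELIZA-style responder: match a keyword pattern, swap pronouns,
# and reflect the user's statement back as a question.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words so the echoed fragment reads naturally."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no rule matches

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```

A couple of dozen rules like these were enough to sustain the illusion of empathy the paragraph describes, which is exactly why users experienced ELIZA as surprisingly human despite its simplicity.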
But the podcast argues that the story shouldn’t stop there.
Kenneth Colby: The Overlooked Pioneer
Before building his more famous chatbot, psychiatrist and computer scientist Kenneth Colby had already published a 1966 paper titled A Computer Method of Psychotherapy. In it, he described a scripted therapy program designed to guide structured therapeutic dialogue. Crucially, Colby envisioned these systems running on time-sharing computers—meaning a single machine could support multiple simultaneous “therapy-like” interactions. Even in 1966, he was thinking about scalability.
Colby did not argue that computers should replace therapists. Rather, he framed them as tools—adjunctive supports that might extend access or handle structured components of care.
PARRY: Modeling Psychopathology
In the early 1970s, Colby developed PARRY, designed to simulate a person with paranoid schizophrenia. Unlike ELIZA’s neutral therapeutic stance, PARRY incorporated rule-based representations of suspicion, persecutory beliefs, and guarded conversational patterns.
In informal Turing-style tests, psychiatrists were asked to distinguish PARRY from real patients with paranoid schizophrenia. Some struggled to tell the difference. For the era, this was remarkable.
In one of AI history’s more whimsical moments, ELIZA and PARRY were made to “talk” to each other. The result was absurd: ELIZA calmly reflecting statements, PARRY responding with paranoid suspicion. It was humorous—but also revealing. Even early chatbots could generate the illusion of personality through rules and framing alone.
The Philosophical Divide
The real tension lies not in the code, but in the ethics.
Weizenbaum later became sharply critical of applying computers to psychotherapy. In his book Computer Power and Human Reason, he warned against delegating deeply human emotional work to machines. He worried about misplaced trust, emotional attachment, and moral outsourcing.
Colby, by contrast, saw potential—if used thoughtfully and as augmentation rather than replacement.
Today’s debates about AI therapy apps, large language models in mental health, and scaling behavioral support echo this exact divide. The 2020s are replaying a 1960s argument—with far more powerful tools.
Lessons for Today’s Builders
The podcast closes with advice to psychiatry residents and computer science students who want to build the next generation of health chatbots:
Understand the ethical stakes. Weizenbaum’s cautions about over-reliance and loss of human connection remain relevant.
Think about augmentation, not replacement. Colby’s framing still provides a workable model.
Design for scale responsibly. The original promise—time-sharing therapy at scale—has now become global LLM deployment.
Preserve transparency. Early systems were rule-based and understandable; today’s black-box models demand even greater ethical rigor.
Bottom Line
Clinical chatbots did not begin with generative AI. They began with psychiatrists and computer scientists asking whether machines could structure, simulate, or extend therapeutic interaction. The core questions—empathy, trust, replacement vs. augmentation, scale vs. humanity—were already on the table in 1966.
What has changed is not the argument.
It is the power of the technology.