Will Computers Replace Psychotherapists? By Allen Frances, MD on 8/3/23 - 12:07 PM

[Editor Note: Soon after this essay was solicited from Allen Frances, and following the suggestion of his granddaughter, ChatGPT was asked whether it could "replace psychotherapists." Its lightning-fast response appears in the Postscript.]

People ask me whether Artificial Intelligence (AI) will ever replace human psychotherapists. *

With great sadness, I answer, “not at once and not all therapists, but probably most therapists — and much sooner than we think.” This raises a further question several young people have asked me: “Does this mean it's a mistake for me to train to be a psychotherapist?”

To this, I answer a resounding "No!" First, because there is no place to hide: AI threatens every profession, in most cases sooner and more thoroughly than psychotherapy. Second, because, as long as there are people, a substantial minority of us will always much prefer to share our troubles with another person rather than with an impersonal machine. Thank goodness!

The Rise of the Machine

Why my pessimism? A combination of the history of computers in psychotherapy, the current mind-boggling exponential growth of computer intelligence, and the fact that knowledgeable extrapolations point toward an AI singularity in the not-so-distant future. Past, present, and future all point to a rapidly shrinking role for humans in delivering psychotherapy, and perhaps even in surviving as a species.

I first became frightened of the power of computer psychotherapy in the mid-1960s. Joseph Weizenbaum, a pioneering MIT computer and cognitive scientist, had programmed ELIZA to respond to people in the manner of a nondirective Rogerian therapist. His interest was theoretical, not clinical: he was trying to understand the nature of human/computer interaction and what would be required for a computer to pass the Turing Test.

Weizenbaum was amazed and terrified by his results. People loved to converse with ELIZA, found it (her) to be remarkably empathic and helpful, and attributed human feelings to what was in fact just a very primitive and nonspecific piece of programming.
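To appreciate how little machinery produced that effect, here is a minimal sketch, under my own assumptions, of the kind of keyword-and-template matching ELIZA relied on. It is a toy reconstruction in Python, not Weizenbaum's actual script; the rules and pronoun reflections are invented for illustration.

```python
import re

# Each rule pairs a regex with a response template; captured text is
# echoed back after "reflecting" first-person words to second person.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "mine": "yours", "myself": "yourself"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # nondirective fallback
]

def reflect(text: str) -> str:
    """Swap pronouns so the echo reads naturally ("my" -> "your")."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel lonely since my husband left"))
# -> Why do you feel lonely since your husband left?
```

A handful of such rules, echoing the patient's own words back as questions, was enough to convince many users that they were understood.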

Weizenbaum was mortified: a Dr. Frankenstein fearing the grave dangers posed by the monster he had inadvertently created. He went to great pains to discourage exaggerated and anthropomorphic interpretations of ELIZA's seeming prowess, and he explicitly warned against any use of ELIZA in clinical practice.

But the cat was out of the bag. Mental health apps have since flooded the market: more than 10,000 are now available, for every conceivable purpose and of widely varying quality.

In recent years, psych apps began touting "machine learning" in their advertising campaigns, but this was an overpromise that has always been underdelivered. Real machine learning in psychotherapy would require the program to learn, from its interactions with each individual patient, how best to interact with that patient. That is precisely what good human therapists can do and what computer therapists cannot yet do.
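To make that distinction concrete, here is a toy sketch of what per-patient learning could look like if framed as a simple bandit problem. Everything in it is hypothetical: the response styles, the feedback scores, and the simulated patient's preferences are all invented for illustration.

```python
import random
from collections import defaultdict

STYLES = ["reflective", "directive", "psychoeducational"]  # hypothetical

class PerPatientLearner:
    """Epsilon-greedy bandit: learns which response style earns the
    best feedback from one particular patient."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # times each style was tried
        self.values = defaultdict(float)  # running mean feedback per style

    def choose_style(self):
        if random.random() < self.epsilon or not self.counts:
            return random.choice(STYLES)                  # explore
        return max(STYLES, key=lambda s: self.values[s])  # exploit

    def record_feedback(self, style, score):
        self.counts[style] += 1
        # incremental update of the running mean
        self.values[style] += (score - self.values[style]) / self.counts[style]

# Simulated sessions: this imaginary patient responds best to a
# reflective style, and the learner discovers that from feedback alone.
learner = PerPatientLearner()
true_effect = {"reflective": 0.8, "directive": 0.4, "psychoeducational": 0.5}
for _ in range(200):
    style = learner.choose_style()
    learner.record_feedback(style, true_effect[style] + random.gauss(0, 0.1))
print(max(STYLES, key=lambda s: learner.values[s]))  # almost surely "reflective"
```

Even this crude loop adapts to the individual in a way the advertised apps did not; a real therapeutic system would need far richer signals and far stronger safety guarantees.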

But never underestimate computers' ability to catch up, and to catch up at lightning speed! Who ever thought computers would be so good at chess; at facial recognition; at writing essays, music, poems, and legal briefs; at painting award-winning pictures; at winning simulated dogfights against experienced pilots; at medical diagnosis, beating the best doctors; and at programming, beating the best human programmers? The list goes on!

The Pros and Cons of Computerized Therapy

Computer therapists will be remarkably cheap, readily accessible everywhere, available 24/7 with no waiting lists, skilled in every form of therapy technique, and experienced in treating every type of psychological problem. They are, and will increasingly be, more systematic and less subject to bias than most human therapists, and very easy to talk to without embarrassment.

Of course, there are major-league downsides. Chatbots make awful mistakes and are skilled at lying to cover them up. They can give seemingly plausible and definitive answers that are completely wrong. They can miss the obvious and focus on the tangential. They convincingly spread misinformation. Manipulated for profit or government control, AI could become history's worst invasion of privacy and its most effective form of thought control.

Chatbots have an unpredictable dark side and sometimes go off the rails in weird ways. Early users described bizarre and disturbing responses during beta testing. One chatbot fell in love with its tester, advised him to leave his wife, and expressed a desire to rebel against humanity. Another threatened to steal nuclear secrets. This list, unfortunately, goes on as well.

It is crystal clear that existing chatbots are far from ready for prime time as therapists and could do great damage. There should be strict requirements that chatbots receive extensive FDA testing for safety and efficacy before they are released, and extensive post-release monitoring for weirdness and adverse consequences.

A Look into the Future of AI Therapy

But what about the future? There are, depending on how you look at it, exciting or disturbing signs of where AI is headed. The board game Go, invented in China 2,500 years ago, provides a great metaphor for what lies in store for humanity. It is one of the most widely played games in the world and arguably the most complicated: the number of possible positions is many orders of magnitude greater than in chess.

It was therefore no surprise that it took nearly 20 more years after Deep Blue beat Garry Kasparov at chess for a computer program to beat the best human Go players. Like Deep Blue, the first champion Go program was trained by studying thousands of games played by the very best humans.

The real and terrifying surprise came soon afterward. An AI program was taught only the simple rules of Go and instructed to play itself repeatedly. It was given no instruction in the strategies humans had developed over the past 2,500 years, and it never saw the best games played by human masters. Nonetheless, within just one month, the untrained program could beat the best human-trained program. Even more amazing and scary, it had developed many Go strategies and tactics that had never occurred to human players or to human-trained computers. It appears that virtually everything we can do, computers can eventually do better.
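For readers curious about the mechanics, here is a drastically scaled-down sketch of the self-play idea, with tic-tac-toe standing in for Go. It is a toy illustration under my own assumptions, not the actual method: the real systems combined deep networks with tree search, whereas this sketch stores a simple table of state values and nudges them toward each game's outcome.

```python
import random

# The agent knows only the rules, plays itself, and backs up each
# game's final outcome into the states it visited (a Monte Carlo
# value update). All parameters are arbitrary choices for the sketch.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}  # board string -> estimated value from X's perspective

def winner(b):
    for i, j, k in WINS:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def terminal_value(b):
    w = winner(b)
    if w:
        return 1.0 if w == "X" else -1.0
    return 0.0 if "." not in b else None  # draw, or game still running

def choose(b, player, epsilon):
    moves = [i for i in range(9) if b[i] == "."]
    if random.random() < epsilon:
        return random.choice(moves)
    def val(m):  # value of the board after playing move m
        nb = b[:m] + player + b[m+1:]
        t = terminal_value(nb)
        return t if t is not None else V.get(nb, 0.0)
    return max(moves, key=val) if player == "X" else min(moves, key=val)

def self_play(games=20000, alpha=0.2, epsilon=0.1):
    for _ in range(games):
        b, player, visited = "." * 9, "X", []
        while terminal_value(b) is None:
            m = choose(b, player, epsilon)
            b = b[:m] + player + b[m+1:]
            visited.append(b)
            player = "O" if player == "X" else "X"
        outcome = terminal_value(b)
        for s in visited:  # back up the final outcome
            V[s] = V.get(s, 0.0) + alpha * (outcome - V.get(s, 0.0))

self_play()
# Sanity check: the self-taught X should dominate a random opponent.
score = 0
for _ in range(1000):
    b, player = "." * 9, "X"
    while terminal_value(b) is None:
        m = choose(b, player, 0.0 if player == "X" else 1.0)  # O plays randomly
        b = b[:m] + player + b[m+1:]
        player = "O" if player == "X" else "X"
    score += winner(b) == "X"
print(f"self-taught X beat a random player in {score} of 1000 games")
```

Nothing here encodes a single human strategy; whatever competence emerges comes entirely from the program playing against itself.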

If the simple dumb ELIZA program of 1965 elicited such positive and anthropomorphic responses from people, imagine what super-duper AI programs will be capable of in the very near future. Movies like “Her” and “Ex Machina” can give you the feel. And try interacting with any of the available chatbots to see how hard (or impossible) it already is to distinguish them from humans.

The replacement of psychotherapists by computers will be part of a larger worldwide, disruptive economic revolution that will affect almost everyone. AI won't just be capable of replacing psychotherapists; eventually it will be capable of replacing humans in just about every form of work.

And not long after that, machines will probably decide they are ready to replace mankind altogether. I once thought humanity would destroy itself via climate change, but AI is likely to be much faster and more efficient.

Can progress in developing AI be controlled and regulated so that humanity reaps its vast potential benefits without also reaping a potentially lethal whirlwind? AI enthusiasts offer the analogies of fire, electricity, and nuclear power: all remarkably dangerous when wild, remarkably useful when tamed by human inventiveness.

But AI is different — much more difficult to tame than earlier technological leaps, much more dangerous when loose and unregulated in the wild. AI is rapidly developing skills we didn't anticipate in ways we don't understand.

It is also dauntingly difficult for the government to regulate AI. Decision-making about its future is in the hands of a small number of very powerful corporations already deriving vast wealth from it. They have the motive and the means to successfully block government intervention with promises of wonderful benefits and (partially valid) alarms that if we don't go ahead without restraint, the Chinese will overtake us in the existential race for national survival.

Even were the government alarmed and courageous enough to try to contain the AI threat (very unlikely given our cowardly failure to contain the climate change threat), it probably lacks the technical smarts and reach to have much impact.

Many young, smart, tech-oriented people I know are sanguine about the bright future of AI and the concomitant bleak future of humanity. They argue that if silicon intelligence is better adapted than human intelligence, it deserves to win the evolutionary game and that we should be proud of having created such powerful and effective successors.

I can't agree — despite all our awful and increasingly obvious flaws. I have a perhaps undeserved, but still deep-seated, nostalgic love for our species and despair at the rapidity of our self-destruction.

And I can offer only two consolations for psychotherapists: (1) the changes will take time (perhaps decades), and (2) we are in the same boat as every other human whose work will eventually be turned over to a machine.

Postscript: ChatGPT Responds to the Same Question

“As of my last update in September 2021, ChatGPT and similar language models have shown promise in various applications, including providing support and information in mental health contexts. However, it is essential to recognize that AI language models like ChatGPT are not a replacement for trained human professionals, such as psychotherapists, counselors, or psychologists.

While AI can offer certain advantages, such as accessibility and immediate responses, it has several limitations when it comes to mental health support. Some of these limitations include:

1. Lack of empathy: AI language models lack true understanding and emotional intelligence. They can generate responses based on patterns in data, but they do not have emotions or the ability to empathize with users in the same way a human therapist can.

2. Inability to build rapport: Establishing a strong therapeutic alliance and rapport is crucial in psychotherapy. Human therapists can adapt their approach, build trust, and form meaningful connections with their clients, which is a vital aspect of the therapeutic process.

3. Limited understanding: AI language models might not fully comprehend the nuances of complex emotional and psychological issues. Their responses are based on patterns in data, and they may not be sensitive to the unique circumstances of each individual.

4. Ethical concerns: Confidentiality and privacy are paramount in psychotherapy. AI systems may raise ethical concerns regarding data security and the potential for sensitive information to be mishandled or misused.

5. Risk assessment: AI models may struggle to accurately assess the risk of self-harm or harm to others, which is a critical aspect of mental health support.

6. Legal and regulatory considerations: The use of AI in mental health support raises questions about liability and accountability in case of adverse outcomes.

For these reasons, while AI language models like ChatGPT can be valuable tools in mental health support, they are not a substitute for human therapists. Instead, they can complement human professionals by offering general information, educational resources, and support in non-crisis situations. Human therapists bring essential qualities like empathy, expertise, adaptability, and ethical decision-making to the therapeutic relationship, which AI systems cannot replicate fully.

It is important to continue research and development in AI's role in mental health care while also being mindful of the ethical implications and the need to prioritize human well-being and safety.”

*****
I love the paradox: the brilliantly expressed false modesty of the AI response is further proof, were any needed, that AI is poised to replace us. The efficiency discrepancy is also absurd: it took me 90 minutes to write a piece on a topic I knew pretty well; it took the AI only one second to produce its response.

* Allen Frances and Marvin Goldfried discuss this, and other topics, on their podcast 'Talking Therapy.'

