Opinion

Wael Haddara is an academic physician, educator and chair of critical care at Western University and London Health Sciences Centre.

In an average week of service in the Intensive Care Unit, I interact with some 30 or more families. For many, these are brief, informational interactions; for some, they are emotion-laden, difficult conversations; but for all, the experience of being in the ICU is a stressful one-time event.

Over the years, I have learned that there are some key aspects to interacting with families to help alleviate, rather than contribute to, the trauma of this stressful episode. This is especially the case when the result of that hospital experience is the death of a loved one. Open communication that is clear and honest – without false hope or undue pessimism – is an important starting point. But what most families reflect back to us is the feeling that we cared. Empathy is a cornerstone of any effective therapeutic relationship.

Numerous studies have documented that students enter medical school with high degrees of empathy, but by graduation their empathy is significantly diminished. The reasons for this shift are many; the stresses of training, the possible absence of positive role models and the overall harshness of the system are among those cited.

Which raises the question: can a robotic physician powered by artificial intelligence offer more knowledge and empathy than actual physicians when responding to patient questions?

One group of researchers believes it’s possible. Because of privacy concerns, they could not examine questions asked in a medical setting. Instead, drawing on a public Reddit forum, the researchers compared physician and chatbot responses to patient questions that had been posted publicly. The result? ChatGPT-generated answers to questions from the community were judged to be three times as knowledgeable and nine times as empathetic as answers from real physicians.

As with any study, there are several limitations to consider. First, the interactions in question lacked a pre-existing therapeutic relationship, which may have affected the level of empathy expressed. Second, the electronic nature of the exchange may have led the responding physicians to prioritize knowledge and content over empathy. Finally, the assessments were made by a team of medical practitioners, so it is unclear whether patients would have rated the responses differently.

But the stark difference in results does raise a different existential question for human beings and AI. If we accept that ChatGPT is not sentient, what does that say about “empathy”? Is empathy just the stringing together of a specific sequence of words? Is it injecting emotion-laden language at certain junctures, even when there is no actual emotion behind it? Do intent and investment matter at all? One of George Burns’s funniest lines was that “sincerity is everything, and if you can fake that, you’ve got it made.” And maybe the point of this particular exercise is that in an era of electronic communication, the only things that matter are words and the sequence in which they are used.

But in real life, empathy is more than just words. In my experience with patients and families confronting catastrophic illness, a truly empathetic response may not involve any words at all. It is a moment of silence afforded to a patient who has just received a terrible diagnosis, or a box of tissues quietly extended to a grieving relative, or a voice that conveys the simple message that we care. It is making time slow down when the world is rushing by, and speaking clearly about outcomes and options without false hope or undue pessimism. It is seeing and reflecting the basic dignity of every human being in our interactions, and ensuring equity and fairness. None of this is easy. It requires investment, energy and emotional commitment. But it is worth doing because it is the essence of our humanity.

Language-based AI engines distill human communication, and human emotion, down to a specific use of language. The danger is not that an AI engine outperforms physicians at electronic communication; it is that the medical profession myopically distills empathy and “good” communication into the use of words and their arrangement, neglecting all the other elements, including intent, investment and sincerity.

I’m not a Luddite, and it has become clear that AI engines may have a transactional role to play in many settings, including health care. But there is a gulf between accepting transactional assistance from an AI engine and coming to believe that the narrow world of today’s language-based chatbots represents the standard of practice to which patients and health care providers should aspire. Rather than offering an easy path to better communication, findings like these should remind us that if we lose our humanity, we deserve to be replaced by AI.