
How AI models may humanize doctors

Many people, physicians included, fear artificial intelligence will replace them. But used judiciously, it may enhance the doctor-patient experience.

Dr. Laurie Margolies demonstrates how breast imaging AI is used to get a second opinion on mammography ultrasounds at Mount Sinai Hospital in New York. “I will tell patients, ‘I looked at it, and the computer looked at it, and we both agree,’” Margolies said. “Hearing me say that we both agree, I think that gives the patient an even greater level of confidence.” (Mary Altaffer / AP)

Plenty of doctors fear: Will AI replace me? AI seems perfectly built to address many of medicine’s inefficiencies. But should you want your doctor to use AI in caring for you?

Recently, I had a patient who might have been exposed to leishmaniasis, an infectious disease rarely seen in the United States.

She had been traveling a few months earlier in rural Mexico, but I did not know critical details, such as which parts of Mexico the disease was circulating in, which species of the parasite were common in the area she had visited, and which treatments would be most appropriate for any species that could have infected her.

While my patient waited, ChatGPT collated a comprehensive answer in less than a minute, and even drew up a color-coded map.

Such help is welcome in a profession facing a crisis of burnout and moral injury. Medical school involves an overwhelming amount of memorization: anatomy, biochemistry, medicines, you name it.

How are doctors supposed to remember everything? But is AI our savior, or a Trojan horse that promises to help us while rife with data mining, privacy issues, and quality concerns?

The situation reminds me of an episode of the TV show The Office in which Michael Scott (Steve Carell) relied on an AI of sorts, his GPS, while driving. He followed its directions and made a right turn into a lake, despite warnings from his colleague Dwight, who tried to tell him there was no road to follow.

That example epitomizes the dangers of AI: While we often need help, we’re still at risk of overreliance on these tools — following models we don’t understand, when common sense could easily prevent mistakes.

I see my brain as having two main cognitive functions. First, it’s a creative tool: I come up with new ideas, synthesize, create new things, and, as a doctor, interpret all the data that comes my way. Second, it’s a storage tool for all my training and years of memorization.

Over time, the mental strain of holding on to all that data limits my brain’s ability to be creative. My brain is like a computer: as the hard drive nears capacity, the RAM I rely on for processing struggles to keep up. I need more bandwidth.

I hope that AI can off-load that storage burden and free physicians like me to focus on the tasks we are best suited for: thinking creatively and interpreting data, rather than rote recall.

I can connect with people to find the right solution. An algorithm may be superior at retrieving the names of diseases, the tests I should order, the codes I should bill, and the like. My human brain knows when it’s appropriate to ask, “Have you traveled outside the country recently?” and how to work through treatment options with patients.

For instance, if I know there’s a word that starts with the letter A that means an inability to express a thought (but dang it, what is that word?), ChatGPT can tell me in a second that the word is aphasia. I knew the concept, which is what matters; the actual name is something I can keep in an external brain, such as a generative AI.

In fact, AI might give doctors the gift of time, freeing them from administrative burdens so they actually talk to you as a patient instead of staring at computer screens, as Dr. Eric Topol, a cardiologist and AI advocate, points out in his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

My fears center on using AI for tasks we’ve never first mastered ourselves, whether we are doctors or students using ChatGPT to write essays. I know how to ask ChatGPT or OpenEvidence for a summary of data or diseases because I’m already practicing medicine. I know how to put an AI response into context, and when to ignore advice that doesn’t make sense.

That means not just anyone can play doctor by asking a large language model the same questions. Lacking a sense of direction, Michael Scott followed his GPS into a lake; I don’t want us to drown because we outsourced common sense. I hear about programmers who never learn to code in the first place because they can ask ChatGPT to do it for them. We mustn’t skip steps in the learning process if we are going to leverage AI to make medicine, and our world, better.

Like it or not, your doctor is, or soon will be, using AI. I have seen that it can literally draw a map to guide my medical treatments. I hope the result will be doctors who are more present, more creative, and more human.

Jules Lipoff practices as a board-certified dermatologist for The Dermatology Specialists in Old City, Fishtown, and Roxborough and serves as a clinical associate professor (adjunct) in the Department of Dermatology at the Lewis Katz School of Medicine of Temple University.