Patients’ use of AI chatbots makes sense, but tread carefully
AI chatbots seem to have all the answers and to be available at any time, while doctors are limited by their humanity. But that humanity is also their strength.

It finally happened this week.
One of my patients, a woman in her 40s with chronic abdominal pain, told me that she had a new companion named Astrid joining her during an office visit. But she was alone in the exam room. It took me a few awkward seconds to realize that Astrid was a chatbot.
She went on to explain that Astrid helps her remember what to ask me about and alerts her to worrisome causes of her symptoms. She often has difficulty scheduling an office appointment and gets a quicker response to simple questions from Astrid than from our patient portal. I felt an odd combination of humbled, curious, and dismayed. And I was relieved that my patient still showed up for her visit, albeit with Astrid’s advice visible on her iPhone screen.
Many of my patients have long consulted the internet about their symptoms; some even apologize for doing it. I reassure them that Googling is normal these days, and often preempt their fears by asking up front, “Is there anything you researched about your symptoms that has you worried?” But Astrid seemed different, as if I had entered a brave new world, truly sharing my space in a medical clinic with something (or someone) I am not sure I can trust.
And this is only the beginning. Companies like Counsel Health have developed “AI-first” platforms that promise to take the first run at triaging a patient’s medical needs, then escalate cases needing further review to a human clinician. Similarly, Massachusetts General Hospital has launched “Care Connect,” an AI chatbot app for patients without a primary care doctor.
I’ve been reading everything I can find about AI chatbots, yet I still felt unprepared to face one in my own clinic. I think part of the reason is that doctors generally assume that bedside skills are squarely in our wheelhouse. In fact, it is these abilities — rather than medical diagnostic and therapeutic capabilities — that many of us cite when discussing the most profound moments in our careers. In my specialty of primary care, relational skills — empathy, presence, communication, patient education — are as central to the work as performing an operation is to a surgeon. As much as personal connection is a high priority for patients, it is also a vital source of meaning and purpose for many doctors.
So it stands to reason that the idea of AI chatbots at the bedside provokes a range of emotions in doctors like me: disbelief, worry, anger, anxiety, sadness, even outright denial. Many of us would rather dismiss the idea that chatbots could compete for our patients’ loyalty, and prefer instead to discuss how AI tools can streamline processes, improve efficiency, and relieve task overload.
That said, what are patients with a time-sensitive medical concern supposed to do if they are told there are no appointments available for two weeks? People lead busy and complex lives. They may get in quickly at Urgent Care or on a telemedicine service, but will likely receive a transactional visit with a clinician who does not know them, with no continuity if things don’t go as expected.
Even for those fortunate enough to get an office appointment for a new concern, these visits can be quite short, and patients may find that their true concerns compete with their primary care doctor’s agenda to address preventive health and other issues for which the practice has a financial incentive under healthcare’s complex payment systems.
And patients see doctors’ human limitations. We get tired and impatient; we are biased; we interrupt; we take cognitive shortcuts; it takes time for us to learn. The rates of harmful human medical errors and inaccurate diagnoses are still intolerably high.
So now there is Astrid and her brethren — tireless, always available, prepared to share vast knowledge in seconds, apparently non-judgmental, and even empathic. Studies describe how patients often lie to their doctors, and how some feel more at ease being vulnerable and sharing emotionally difficult matters with a chatbot.
Generative AI is truly remarkable, and maybe someday chatbots will best doctors at our craft. But before you give yourself over fully to the temptation of AI, consider a few words of caution.
Technology developers do not uphold any long-held tradition or take an oath to act in your best interest. They simply aspire to create the most useful and marketable tools possible. Chatbots can “hallucinate,” providing information that is false or unsubstantiated. They are designed to please you and can be seductively sycophantic. They cannot form long-term, honest, collaborative relationships with you — as committed primary care doctors can — nor can they coordinate the complex, overlapping array of concurrent medical, social, emotional, and financial issues that characterize a journey through illness.
The promise of AI chatbots speaks loudly, and the message is being received with both interest and concern. Many physicians and healthcare leaders are replacing apprehension with curiosity, endeavoring to better understand the allure. That allure implores us to overcome decades of inertia and deliver primary care that is accessible and efficient, and that prioritizes patients’ stories, needs, and concerns above all else. Practicing this version of primary care also stands a better chance of keeping more primary care doctors in the workforce and attracting new medical graduates to the specialty.
Doctors are working with developers to help make medical AI better, safer, more equitable, and ethically sound. Chatbots have far greater potential as partners in the doctor-patient relationship than as alternatives to it. My request of my patients: use them carefully, and keep your appointments. And share what you learn — my colleagues and I need to hear what Astrid is recommending.
Jeffrey Millstein is an internist and regional medical director for Penn Primary and Specialty Care.