What AI can’t do for patients

In my last three articles I have shared with you what I have learned about artificial intelligence, systems such as ChatGPT and Tesla's Autopilot. I then told you a fantastic story about Rob the Robot, an imaginary robotic lover. And I asked whether artificial intelligence, in the form of robots dressed in white coats, for example, could successfully replace doctors.

Predicting the future is a difficult business. I'm afraid that no matter how hard I try, my imagination will fail to foresee the full extent and capabilities of future artificial intelligence systems. I can easily imagine, though, the day when artificial intelligence will take over some of the more mundane tasks of doctors. The other day, a patient arrived in my office seeking a second opinion. She had seen several doctors at different hospitals and had undergone multiple tests and several surgical procedures. She brought with her a printed copy of her prior medical records as thick as the telephone book of a major city. I could imagine how, in the near future, my team would feed the document into a scanner and retrieve a summary of this patient's history with all the relevant information, methodically and precisely summarized, in a format and in a language fully consistent with my own style. Ask ChatGPT to write a poem, in a Shakespearean style, about a dog falling in love with a banana, and it will do so. I believe that soon, very soon, a similar system will be able to reliably "crawl" through a patient's medical history, extract the relevant information, and summarize it in a coherent manner.

Taking a small leap forward, I can also imagine an artificial intelligence application capable of asking my patients the relevant questions in order to narrow down the list of potential diagnoses and to recommend which treatment should be rendered. A system like that would draw from an unlimited trove of information it had gathered by scanning medical journals, medical books, and the texts of lectures given by leading medical experts. In fact, I need not stretch my imagination far, because early versions of applications that demonstrate such capabilities already exist: in two recent articles published in December 2022, researchers reported that ChatGPT and Flan-PaLM (another artificial intelligence system) could pass the U.S. Medical Licensing Examination (USMLE). Flan-PaLM, the better of the two systems, was able to answer the USMLE questions with 67.6% accuracy. In other words, it did well enough to pass the exam, but it did not perform as well as many of the more knowledgeable medical students.

Artificial intelligence systems will no doubt improve, but can they be perfected to the degree that they would replace doctors? It seems that the road ahead is long, convoluted, perhaps impassable. Learning from Tesla's Autopilot experiment, I feel that artificial intelligence, at its current stage, excels at processing tremendous amounts of data but fails when the unexpected happens. It doesn't have internal knowledge or understanding of the world. It does not have common sense. It can't stretch and wrap its mind around a problem. It uses a binary language, an almost infinite series of 'yes' and 'no' propositions, to complete routine tasks, but it can't (yet) navigate more complex situations for which it wasn't extensively trained. Using artificial intelligence in its current form, we have witnessed embarrassing situations: a Tesla car hitting a jet airplane; a ChatGPT search engine returning false results; and a conversation with Bing AI turning eerily bad, with the artificial neural network declaring its love for the user and admitting its secret desire for world domination.

Most importantly for medicine, artificial intelligence cannot truly understand the human condition. The patient I saw for a second opinion is desperate not just for answers, but for comfort and for hope. I can tell that from reading her facial expressions and her body language. I can tell because I do what artificial intelligence can't do: I observe. She saw the best of doctors. She went through every test possible. She tried all the available treatments. She read online all the advice one can get. What else is there? I do what artificial intelligence can't do: I listen to her. I tell her about her condition and about my own experience with patients like her. She can tell that I can see her pain, that I can see her as a person. It's empathy, and it's magic. And let me tell you, sometimes, most times, it helps.

Dr. Shahar Madjar is a member of Aspirus Medical Group based out of Laurium. He specializes in urology.
