(CNN) – Is my doctor using artificial intelligence to diagnose me during our appointment? Is he recording our conversation to create an AI summary of our visit?
The use of AI in health care is still new enough that many people may not know what to make of it. Most Americans expressed "significant discomfort" about the idea of their doctors using AI to help manage their care, according to a 2023 survey. But AI is not likely to go away. The use of AI applications in medical care is growing, and it is important for patients to understand the uses that could improve care, as well as the reasons for continued caution.
I wanted to know how AI is already aiding diagnosis and helping to direct treatment, and what clinicians think about its use. And finally, what are the areas of concern, and what is being done to address them?
To guide us with these questions, I spoke with CNN wellness expert Dr. Leana Wen. Wen is an emergency physician, adjunct associate professor at George Washington University and a nonresident senior fellow at the Brookings Institution, where her research includes the intersection of technology, medicine and public health. She previously was Baltimore's health commissioner.
CNN: How should patients think about the different uses of AI in health care?
Dr. Leana Wen: First, it helps to know the difference between predictive and generative artificial intelligence, or AI.
Predictive AI uses mathematical models and pattern recognition to predict the future. For example, a predictive AI algorithm can identify which patients with pneumonia are most likely to require hospitalization.
Let's say you're the patient. Using past experiences with many other patients with a similar condition — such as pneumonia, diabetes or heart disease — an algorithm could come up with a care plan for you based on factors that could impact your illness, such as your age, gender, other medical conditions, laboratory data and racial and ethnic background. The algorithm can help doctors decide, for instance, whether you need to be hospitalized and what treatment is most likely to be effective for your specific set of circumstances.
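To make the idea concrete, here is a minimal, purely illustrative sketch of how such a predictive algorithm works under the hood: a model is fit to past patient records and then estimates a new patient's risk. The patients, features and outcomes below are invented for demonstration and bear no resemblance to a validated clinical tool.

```python
# Toy sketch only: a made-up "hospitalization risk" model in the spirit of the
# predictive AI described above. All data here are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past patients: [age, number of other conditions, oxygen saturation %]
X_past = np.array([
    [34, 0, 97],
    [71, 2, 89],
    [55, 1, 94],
    [82, 3, 86],
    [29, 0, 98],
    [64, 2, 90],
])
# Invented outcomes: 1 = was hospitalized, 0 = recovered at home
y_past = np.array([0, 1, 0, 1, 0, 1])

# "Learn" the pattern linking patient factors to outcomes
model = LogisticRegression(max_iter=1000).fit(X_past, y_past)

# Estimate risk for a new patient: 60 years old, one other condition, O2 sat 91%
new_patient = np.array([[60, 1, 91]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated hospitalization risk: {risk:.0%}")
```

A real system would be trained on many thousands of records and tested against real outcomes, which is part of why each tool needs to be evaluated on its own merits.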
Generative AI uses large language models to generate humanlike interactions. Many people may be familiar with ChatGPT, which is a form of generative AI that answers user questions in a conversational manner. Generative AI can summarize huge quantities of information in a very short period of time, far faster than any human could. Some studies have suggested that generative AI models can "learn" so much that they can pass medical licensing exams and generate easy-to-understand, well-written patient instructions on a variety of topics.
There are, though, concerns that these models could "hallucinate" and come up with responses that are misleading or inaccurate. And with both predictive and generative AI, how well the models work depends on the data they were trained on. When assessing the utility of AI in health care, it's important to look at each AI tool separately and to understand how it was developed and in what circumstances it is meant to be used.
CNN: How is AI already being used to diagnose patients?
Wen: There are some well-validated examples of predictive AI being used to augment and improve diagnosis.
Take a colonoscopy, which is the gold standard for diagnosing colorectal cancer. During the procedure, the physician passes a long tube through the colon and manually identifies and removes polyps that could be cancerous or precancerous.
AI can be "trained" to identify polyps and then flag them during a colonoscopy. Multiple randomized controlled studies performed at health systems around the world have shown that using AI to augment colonoscopies substantially reduces the miss rate of potentially cancerous lesions.
Similarly, AI is being used to assist with reading mammograms, which is a key screening tool to detect breast cancer. Studies have found that AI-supported mammography screening is at least as accurate as having two trained radiologists read the study and may even improve cancer detection while reducing clinician workload. The US Food and Drug Administration has already authorized about two dozen AI products that help with mammogram cancer screening, though their adoption remains limited due to the potential extra cost involved in deploying them in clinical practice.
CNN: How is AI being used to direct treatment?
Wen: One use case is a predictive AI algorithm developed by researchers at Johns Hopkins University to identify hospitalized patients at high risk of developing sepsis, a life-threatening, body-wide response to infection that can lead to multi-organ failure and death. This early-warning system has been deployed at multiple hospitals and has been found to reduce the time it takes to detect sepsis and, therefore, to start antibiotics and other treatment.
Kaiser Permanente has also deployed a predictive AI tool that looks for signs of deterioration in hospitalized patients. If data based on a patient's vital signs, laboratory tests, nurse reports and other factors points to worsening clinical status, the system issues an alert so that the patient can be rapidly evaluated by an in-person clinician. This tool was associated with a significantly lower mortality rate.
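As a rough illustration of how such an early-warning alert might work, here is a simplified sketch. The vital-sign thresholds and the "two or more signs" rule are invented for demonstration; the actual systems described above draw on much richer data and validated statistical models.

```python
# Illustrative sketch only: a simplified deterioration/sepsis-style alert.
# The thresholds and scoring rule below are invented for demonstration and do
# not reflect the Johns Hopkins or Kaiser Permanente tools.
def deterioration_alert(heart_rate: int, systolic_bp: int, temp_c: float, lactate: float) -> bool:
    """Return True if the combination of vitals and labs warrants rapid review."""
    warning_signs = 0
    if heart_rate > 110:                  # fast heart rate
        warning_signs += 1
    if systolic_bp < 95:                  # low blood pressure
        warning_signs += 1
    if temp_c > 38.3 or temp_c < 36.0:    # fever or abnormally low temperature
        warning_signs += 1
    if lactate > 2.0:                     # lab marker of poor tissue perfusion
        warning_signs += 1
    return warning_signs >= 2             # two or more signs triggers an alert

if deterioration_alert(heart_rate=118, systolic_bp=92, temp_c=38.6, lactate=2.4):
    print("ALERT: flag patient for immediate bedside evaluation")
```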
CNN: How can I find out how my provider's office is using AI for diagnosis and other tasks? What can I do to protect my privacy?
Wen: A lot of what we now refer to as AI has existed for some time. Predictive algorithms have been used for years to help tailor treatment plans, for instance. Doctors' offices are using AI more and more to help draft responses to emails and to assist with their documentation.
I think it's reasonable to assume that your provider's office is using some form of AI in your care. You could ask your provider and also refer to policies that the office asks you to sign, which may include permissions to use certain technologies.
Your medical records are secured and protected by the Health Insurance Portability and Accountability Act, or HIPAA, which legally covers protection of sensitive health information.
CNN: What do providers think about the use of AI? Does it reduce inefficiency and alleviate the burden of paperwork?
Wen: I think clinicians have been especially glad to have AI applications that reduce documentation and paperwork. For instance, generative AI is being used to assist with medical documentation in a technology called ambient AI. Essentially, the tool "listens" to the conversation between doctor and patient and then converts it into a medical note that the doctor can edit. Studies have found that this ambient AI scribe reduces note-taking time, and it's been viewed positively by physicians and patients alike.
Doctors are also using generative AI to help with prior authorization letters they have to send to insurance companies to get permission for certain medications and treatments. This can help reduce a medical officeโs administrative burden and perhaps even combat health care worker burnout.
CNN: Could AI be used by insurance companies to turn down claims?
Wen: It could be. One could also imagine the presence of AI being a substantial barrier to accessing a human being at an insurance company to discuss claims and other issues.
This is one of many areas of concern in using AI. Others include ensuring ongoing privacy and data security, as well as the need to independently validate algorithms and to share those results transparently. Technologists, clinicians and regulators would do well to consider the many positive uses of AI in health care while also rigorously studying each tool and deploying it with caution.
The-CNN-Wire™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.