
From chalkboards to chatbots: The emerging landscape of AI in medical education

Deepak R. Pradhan, MD, MHPE, FCCP

The growing uptake of artificial intelligence (AI) systems in health care is creating many opportunities for integrating such systems into medical education, which may prove timely given the global shortage in health care workers.1,2 In this context, AI refers primarily to generative large language models (LLMs) used for text-based educational and clinical support tasks rather than to predictive machine learning models or imaging algorithms.

“In medical education, AI tools can be used to summarize clinical case histories, draft learning questions, and support quality control of educational content across formats, including slide decks,” said Deepak R. Pradhan, MD, MHPE, FCCP, Associate Professor of Medicine and Associate Director of the Pulmonary, Critical Care & Sleep Medicine Fellowship Program at NYU Grossman School of Medicine.

AI-enabled tools can also be installed on smart devices and used by educators during ICU rounds as a “workstation on wheels” that provides clinically relevant summaries by integrating practice guidelines, he added.

Richard M. Schwartzstein, MD, FCCP

However, for now, Dr. Pradhan said, human instructors should remain the principal engines and arbiters of high-quality medical education. Richard M. Schwartzstein, MD, FCCP, the Ellen and Melvin Gordon Distinguished Professor of Medicine and Medical Education at Harvard Medical School, agreed. He said that overreliance on AI systems could be problematic for educators.

At the 2025 Millennium Conference, teams from eight medical schools explored the role of AI in medical education.3 One of the themes that emerged is that there is still a need, particularly in pulmonary and critical care medicine, for students and doctors-in-training to learn the foundational concepts of physiology and pathophysiology and to be able to critically evaluate AI-generated information and identify problems, mistakes, or “hallucinations,” Dr. Schwartzstein said.

“While AI models are changing quickly, the basis of the LLMs, at least to date, is primarily pattern recognition, which by itself is not adequate for real-time clinical decision-making and optimal patient care,” he said. “There are also biases in LLMs.” AI can reproduce biases in its training data, and studies suggest these biases can be reinforced when people rely on its outputs.4

Clinical reasoning skills and foundational knowledge therefore are paramount, especially in specialties like pulmonary and critical care medicine with high-acuity patients.

Striking a balance

“There is a growing pipeline of high-quality, peer-reviewed literature that we can leverage using AI in education,” Dr. Pradhan said. “We can employ AI tools to develop multifaceted, learner-centric educational strategies, ranging from interactive case-based narratives to individualized explanations of physiology, as well as guideline- and literature-based summaries and questions that adapt to different learners’ needs.” These applications align with recommendations from the 2025 Millennium Conference, which emphasized using AI as a tool for personalized coaching, tutoring, and reflection, while preserving essential clinical skills.5

However, while AI-enabled drafting of teaching cases offers conveniences, such as generating scenario-specific laboratory values or blood gas measurements, the outputs are not always error-free, Dr. Schwartzstein warned. Instructional content created with AI therefore should be assessed for accuracy, alignment with teaching objectives, and real-world usefulness. The process is iterative, akin to a “dialogue” with an assistant or peer instructor, he said.

Whether clinician educators are considering using AI tools as portable, readily accessible knowledge banks or as “peer” instructors capable of critical reasoning, similar challenges apply. Significant concerns exist around the accuracy, reliability, and attribution of sources; the completeness of information; biases; and the compounding or propagation of errors.

The precision or perspective of the prompt provided to AI tools can also influence the quality and accuracy of the response.

Dr. Schwartzstein illustrated this with a personal example. He uses medical malpractice cases related to delayed diagnosis or misdiagnosis in his teaching. When he provided an LLM with such a case and asked for a diagnosis, the AI did not identify the correct diagnosis. However, when he asked, “What else could this be?” based on the same case details, the AI offered the correct diagnosis as an option.

Attribution is another concern, such as when using AI-aided methods for assessing trainees’ clinical case notes.

Dr. Schwartzstein, who is leading a study on the performance of AI-assisted grading in classroom courses, said, “While AI tools have been useful in providing an aggregate view of a cohort of learners, they are not as proficient in ascribing a grade to a specific learner’s response, especially for open-ended questions.”

The risk of lost skills

The loss or erosion of core clinical competencies and reasoning skills—or “de-skilling”—is another concern as students adopt AI tools, especially LLMs, more readily, even while acknowledging their limitations.6,7 Taken further, overreliance on AI in education risks preventing learners from ever developing essential skills, such as note-writing and clinical assessment, through which trainees form core habits that should not be replaced by AI shortcuts.5

Trainees must still generate their own problem representation, differential diagnosis, and assessment/plan before consulting AI.4,5 Without strong foundational reasoning skills, clinicians may lack the judgment needed to recognize when AI outputs are incomplete, miscalibrated, or incorrect.4

Dr. Schwartzstein said that the importance of trainees’ skills in conducting a thorough physical exam and evaluating clinical history has received little attention. These fundamental skills are critical, he said, because they inform the prompts that clinicians make when leveraging AI assistance for a diagnosis. Incomplete histories and superficial examinations produce poorer prompts, narrower differentials, and higher-risk recommendations.

“Physical exam skills, frankly, are deteriorating,” Dr. Schwartzstein said.

Also, in AI-enabled clinical simulations, cases are often presented with complete histories and comprehensive laboratory assessments, unlike in routine practice, where information is often missing. To avoid training in an artificially “clean” world, simulations should deliberately incorporate missing data and should assess how learners revise hypotheses rather than how quickly they converge on an answer.5

Dr. Pradhan said clinicians need firsthand experience creating core academic content—such as writing research proposals, study protocols, and slide decks—without AI before they integrate AI-enabled tools into their work.

Such “effortful learning” can mitigate the risk of de-skilling or never-skilling, he said. This matters because expertise grows through productive struggle—effort that feels slow but builds error-detection skills that LLM convenience can bypass.4

Dr. Schwartzstein added that overreliance on AI can also cloud clinicians’ “professional identity formation”—the complex and transformational process of internalizing core knowledge, skills, values, and beliefs.

Arming clinician educators with practical AI skills

AI tools no longer seem optional, given their increasing adoption by clinicians and health care systems in recent years.8-10 However, there is significant heterogeneity in how clinician educators have approached and implemented these tools, Dr. Pradhan said.

“Many clinician educators are still unfamiliar with the full range of AI use cases and how best to integrate these tools efficiently and effectively into their educational practice,” he said.

A practical minimum competency set for clinician-educators includes understanding common LLM failure modes (hallucination, bias, noncalibration), verifying claims with primary sources, protecting trainee reasoning workflows, and supervising AI use with explicit documentation of what the tool did and did not contribute.11

In a recent survey of pulmonologists, for instance, while most respondents reported encountering AI in their practice and some familiarity with AI-related terms, less than a quarter (24%) reported feeling comfortable teaching AI-related concepts to their trainees.12

There is growing recognition of the need to educate the educators on best practices for using AI; professional medical organizations have drafted resource guides for medical educators.13,14

The DEFT-AI (Diagnosis, Evidence, Feedback, Teaching, AI engagement) framework is another resource that can help prevent overreliance on AI and embed critical thinking into AI-augmented practice and medical education.15

Dr. Schwartzstein is part of a group within the American Thoracic Society that is developing a postgraduate course on AI for medical educators. He noted that the Association of American Medical Colleges is also considering a similar initiative. Dr. Pradhan will serve as faculty for a 2026 Association of Pulmonary and Critical Care Medicine Program Directors workshop on AI for clinician educators.

Newer tools, like OpenEvidence, give educators an opportunity to review both content and citations/sources, facilitating content vetting. Drs. Schwartzstein and Pradhan also pointed out that AI tools can help redraft educational content in formats that reflect the preferences of contemporary learners, many of whom favor podcasts over reading. NotebookLM, for instance, can convert a book, a peer-reviewed scientific article, or other text-based information into a podcast.

Health care professionals need defined competencies spanning foundational AI literacy, ethical and equity awareness, critical appraisal of AI outputs, integration into clinical workflows, effective patient interaction, and continuous learning to use AI tools safely and responsibly in practice.16

“Educating the educators on what tools are available is the first step to getting to best practices for the use of AI in medical education. It is a moving target because AI models keep changing [and] improving,” Dr. Schwartzstein said.

Dr. Pradhan concluded, “AI represents an emerging educational skill, and the traditional hierarchical and siloed approach to medical education needs to evolve. Sharing lessons learned, and learning alongside our trainees, will be essential moving forward.”


References

1. Health workforce. World Health Organization. 2026. https://www.who.int/health-topics/health-workforce

2. Angus DC, Khera R, Lieu T, et al. AI, Health, and health care today and tomorrow: The JAMA summit report on artificial intelligence. JAMA. 2025;334(18):1650-1664. doi:10.1001/jama.2025.18490

3. Shapiro Institute. Millennium 2025. BIDMC – Shapiro. 2025. https://www.shapiroinstitute.org/mc2025

4. Furfaro D, Celi LA, Schwartzstein RM. Artificial intelligence in medical education: a long way to go. Chest. 2024;165(4):771-774. doi:10.1016/j.chest.2023.11.028

5. 2025 Millennium Conference: Artificial intelligence and the future of medical education. The Carl J. Shapiro Institute for Education and Research. October 2025. https://www.shapiroinstitute.org/_files/ugd/09ffdc_8dee4c5206b640289b144d01fd121ce3.pdf

6. Tran C, Hryciw BN, Moore SW, Chaput A, Seely AJE. Perceptions and use of generative artificial intelligence in medical students: A multicenter survey. J Med Educ Curric Dev. 2025;12:23821205251391969. doi:10.1177/23821205251391969

7. McCoy L, Ganesan N, Rajagopalan V, McKell D, Niño DF, Swaim MC. A training needs analysis for AI and generative AI in medical education: perspectives of faculty and students. J Med Educ Curric Dev. 2025;12:23821205251339226. doi:10.1177/23821205251339226

8. Henry TA. 2 in 3 physicians are using health AI—up 78% from 2023. American Medical Association. February 26, 2025. https://www.ama-assn.org/practice-management/digital-health/2-3-physicians-are-using-health-ai-78-2023

9. Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D. Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges. J Am Med Inform Assoc. 2025;32(7):1093-1100. doi:10.1093/jamia/ocaf065

10. Jain SH. AI Adoption in healthcare is surging: What a new report reveals. Forbes. October 21, 2025. https://www.forbes.com/sites/sachinjain/2025/10/21/ai-adoption-in-healthcare-is-surging-what-a-new-report-reveals/

11. Cao W, Zhang Q, Liu J, Liu S. From agents to governance: essential AI skills for clinicians in the large language model era. J Med Internet Res. 2026;28:e86550. doi:10.2196/86550

12. Mehta V, Avasarala SK. Perceptions of artificial intelligence among pulmonologists. ERJ Open Res. Published online November 13, 2025. doi:10.1183/23120541.01304-2025

13. Lomis KD. The medical educator’s guide to projects leveraging artificial intelligence and learning analytics. American Medical Association. 2025. https://www.ama-assn.org/system/files/ai-educator-guide.pdf

14. AAMC. Artificial intelligence and academic medicine. AAMC. 2026. https://www.aamc.org/about-us/mission-areas/medical-education/artificial-intelligence-and-academic-medicine

15. Abdulnour REE, Gin B, Boscardin CK. Educational strategies for clinical supervision of artificial intelligence use. N Engl J Med. 2025;393(8):786-797. doi:10.1056/NEJMra2503232

16. Russell RG, Lovett Novak L, Patel M, et al. Competencies for the use of artificial intelligence-based tools by health care professionals. Acad Med. 2023;98(3):348-356. doi:10.1097/ACM.0000000000004963
