Disclaimer: The views expressed in this article reflect the opinions of the authors and do not necessarily reflect the official policy or position of the US Department of the Navy, US Department of Defense, or the United States government.

Ready or not, artificial intelligence (AI) is here. And it’s already changing the landscape of medicine as we know it.
Though this is far from the first time that technological advancement has caused a monumental shift in society, AI’s black box nature has triggered notable mistrust—especially regarding its potential incorporation into health care. This skepticism is, however, reminiscent of the reception that computers and the advent of electronic health records (EHRs) met in the 1960s.
AI is a large umbrella term for systems designed to mimic human intelligence. It can be broadly categorized as either predictive or generative in function. To simulate human intelligence, AI deconstructs it into six basic pillars: natural language processing (communication); knowledge representation (understanding); automated reasoning (thinking); machine learning (learning); computer vision (sight); and robotics (movement).1 Machine learning (ML) is best defined as the identification of patterns and structures within a dataset; it is the domain whose recent advancements have brought AI as a whole such significant attention.

Truthfully, AI has had direct applications in medicine for decades.1 With its continued advancement and the ready accessibility of systems like OpenEvidence, there are many promising opportunities to apply AI as a new tool in critical care medicine and medical education.
Critical care medicine
Critical care medicine has predominantly been defined by reactive processes rather than proactive ones. Interventions are generally pursued after clinical deterioration, whereas most recent AI advancements in the medical field rely on its predictive function. Research has been conducted on ML models that can predict sepsis, respiratory distress, and other conditions commonly seen in the ICU.2
Among AI studies currently being conducted in ICU settings, 22.2% aim to predict complications, 20.6% to predict mortality, and 18.4% to improve prognostic models. Unfortunately, most have not progressed to the point of practical application.3 A notable exception is a validated severe sepsis prediction algorithm developed in Hayward, California. Using only a patient’s vital signs and age, the ML algorithm predicted sepsis significantly more accurately than the Sequential Organ Failure Assessment score, the Systemic Inflammatory Response Syndrome criteria, and the Modified Early Warning Score. It also significantly decreased both the primary outcome of length of hospitalization (by 20%) and the secondary outcome of in-hospital mortality (by 12.4%).4
Impressively, a few AI algorithms have already been integrated into clinical practice after US Food and Drug Administration (FDA) clearance, such as the Analytic for Hemodynamic Instability software.5 Designed for patients receiving continuous telemetry, it can analyze a single lead of an electrocardiogram in tandem with intermittent noninvasive blood pressure monitoring to assess hemodynamic status and thus identify early evidence of hemodynamic instability.
It is important to note that ML algorithms are being studied as tools for clinicians to use in tandem with clinical acumen, not as a replacement for it.
AI, like any other instrument, has its limitations. ML algorithms undergo “training” based on existing data. For ICU-based algorithms, this training is often done on large, publicly accessible databases such as the Medical Information Mart for Intensive Care-III, eICU, and AmsterdamUMCdb.2
Systems trained on homogeneous datasets can produce biased algorithms that negatively affect outcomes for underrepresented patient populations. In one instance, an AI algorithm initially developed to predict length of hospital stay was found to treat less affluent zip codes as a predictive variable and consequently forecast longer hospitalizations for patients based solely on their addresses.5
A large area of interest in AI application within medicine is the automation of existing health care tasks. AI can potentially automate up to 45% of administrative tasks in health care, allowing physicians to focus more directly on patient care.5 EHR maintenance in particular is frequently cited as a leading cause of physician burnout, actively limiting patient interactions and sometimes negatively affecting patient care.
While predictive analytics rely on advancements in ML, another subcategory of AI called natural language processing (NLP) has also advanced significantly in recent years. Many AI-powered software programs on the market can automate clinical notes from transcriptions of patient interactions.1 While NLP algorithms have predominantly been tested in the outpatient setting, they could clearly be extrapolated to inpatient rounding notes, goals-of-care discussions, and procedure notes.
Currently, most of the technology used in health care focuses on clinical support services as opposed to direct patient application. Of the 1,247 FDA-cleared, AI-enabled medical devices, more than 75% are radiology-based and focus on triaging emergent cases or acting as second readers.6 While AI has transformative potential for critical care medicine, it will likely be a few more years before such technology is commonplace in ICU settings.
Medical education
The way we think, learn, and interact with educational material has constantly adapted to technological advancement. From written text to PDFs, the formats of that material have evolved, and the amount of knowledge we can access has only continued to grow. In fact, the amount of medical research published annually doubles every five years.7 It is no surprise that, with the full-scale arrival of AI, the medical educational environment will once again undergo a seismic shift. Whereas most of the AI applications discussed in the previous clinical section are predictive in nature, there are many ways to apply AI’s generative functions in the realm of medical education.
Large language models (LLMs) are complex algorithms that use a combination of the AI domains to generate humanlike text.8 One LLM that has been making waves in the medical community is the previously referenced OpenEvidence. A free and unlimited tool for health care professionals, OpenEvidence aims to improve medical literature accessibility and synthesis. Content agreements with The New England Journal of Medicine, the Journal of the American Medical Association, and the National Comprehensive Cancer Network allow the model to provide users with evidence-based summaries, links to articles, and up-to-date information on clinical guidelines, diagnostic criteria, and management approaches.9–10
OpenEvidence can be a valued resource for medical students and resident physicians alike for study preparation. Its ability to direct nearly any peer-reviewed resource to a user’s phone is unparalleled. This ready access to clinically relevant content helps directly integrate learning with clinical practice.
Notably, within two years of its launch, OpenEvidence became the first AI platform to score 100% on all three Steps of the United States Medical Licensing Exam.8
While commending its uses, it is also important to note OpenEvidence’s potential pitfalls. Overreliance on the platform can diminish the critical thinking skills necessary for independent practice. Moreover, though all LLMs encode billions of parameters, none are unlimited in knowledge. When asked questions for which they have no grounding, many LLMs produce replies that are not based in reality. This occurrence, known as an “AI hallucination,” remains a mathematical inevitability of a training process that rewards conjecture over honest uncertainty.9 Hallucinations can consequently cause mislearning. For this reason, OpenEvidence is best used as an adjunct to, rather than a substitute for, critical thinking and clinical acumen.
From an educator’s perspective, OpenEvidence and other LLMs have notable benefits that are hard to ignore. The ability to craft curated learning objectives and challenging multiple-choice questions from blueprints and medical topics can augment curriculum-building, though all generated questions and answers should be reviewed before being incorporated into a course. OpenEvidence also offers “Trending” and “New evidence” tabs within its “Feed” section that surface popular and cutting-edge research; these can be filtered by medical specialty for a more curated selection of journal articles.10
As AI continues to evolve, future high-yield applications will include personalized medical instruction to better suit learners of all levels. Another area in critical care medicine education where AI will prove invaluable is simulation.9 Already a cornerstone of medical student and resident education alike, simulation allows for clinical practice in a safe environment. High-acuity ICU scenarios are often difficult to simulate realistically, and LLM-based simulations could generate dynamic, patient-specific cases that better mirror real-world complexity.
LLMs have the potential to completely alter how medical education is pursued by learners and educators alike. But while there are many benefits to a tool that consolidates knowledge and research articles, LLMs have limitations and should be used as supplements to, not substitutes for, primary literature review and clinical expertise.
Adapting to change
AI’s ubiquity will only continue to grow. As clinicians, it is important to welcome this change to the medical landscape early on and to incorporate it into our practices and teachings as we see fit, so as not to be left behind by the wave of technology.
In the same vein, it is important to ensure that our learners understand the role AI should play in their medical education and to promote its responsible use.
For the practicing intensivist, much of AI’s impact has yet to be felt. However, when AI inevitably integrates into critical care practice, we look forward to having another tool in our toolbelts.
This article was originally published in the Spring 2026 issue of CHEST Physician.
References
1. Zhang A, Wu Z, Wu E, et al. Leveraging physiology and artificial intelligence to deliver advancements in health care. Physiol Rev. 2023;103(4):1675-1703. doi:10.1152/physrev.00033.2022
2. Biesheuvel LA, Dongelmans DA, Elbers PWG. Artificial intelligence to advance acute and intensive care medicine. Curr Opin Crit Care. 2024;30(3):218-224. doi:10.1097/MCC.0000000000000981
3. Fleuren LM, Thoral P, Shillan D, Ercole A, Elbers PWG. Machine learning in intensive care medicine: ready for take-off? Intensive Care Med. 2020;46(11):2067-2070. doi:10.1007/s00134-020-06045-y
4. Shimabukuro DW, Barton CW, Feldman MD, Mataraso SJ, Das R. Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial. BMJ Open Respir Res. 2017;4(1):e000234. doi:10.1136/bmjresp-2017-000234
5. Woite NL, Gameiro RR, Leite M, et al. Understanding artificial intelligence in critical care: opportunities, risks, and practical applications. Crit Care Sci. 2025;37(1):1-10. doi:10.1234/criticalcare.2025.000234
6. US Food and Drug Administration. Artificial intelligence-enabled medical devices. Published 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices
7. National Library of Medicine. PubMed total records by publication year. Published 2025. https://datadiscovery.nlm.nih.gov/Information-Management/PubMed-total-records-by-publication-year/eds5-ig9r/about_data
8. OpenEvidence. OpenEvidence is the leading medical information platform. Published 2025. https://www.openevidence.com/about
9. Parente DJ. Generative artificial intelligence and large language models in primary care medical education. Fam Med. 2024;56(9):697-703. doi:10.22454/fammed.2024.0329
10. Patel N, Grewal H, Buddhavarapu V, Dhillon G. OpenEvidence: enhancing medical student clinical rotations with AI, but with limitations. Cureus. 2025;17(1):e321677. doi:10.7759/cureus.321677
