Promise, Pitfalls, and Protection: Predictive AI in Healthcare
Key Highlights
- Predictive AI is increasingly used in healthcare for ICU sepsis, oncology, cardiology, diabetes, and COVID-19, with new tools emerging rapidly.
- Adoption rates among physicians have surged from 38% to 66%, but integration into workflows remains a significant challenge.
- Most predictive models focus on infections like C. difficile and surgical site infections, which have clearer data for modeling.
- Data interoperability and workflow variation across hospitals are major barriers to scaling predictive analytics in infection prevention.
- While AI can streamline chart review and data visualization, clinical interpretation and decision-making still require human judgment.
Predictive artificial intelligence (AI) and machine learning (ML) are increasingly being used in healthcare settings across the care spectrum. A systematic review of studies examining these systems offers a sense of the scale, summarizing how often predictive machine learning has been applied across “various healthcare domains such as ICU (sepsis, mortality), oncology, cardiology, diabetes management, and COVID-19.”1
New tools are seemingly introduced or developed daily. We’ve written about a number of them on our website; in March 2026 alone, for instance, a study funded by the National Institutes of Health included the development of a machine learning model to assess CT scans, successfully categorizing scans between two codes over 81% of the time. It also identified which patients were at higher risk of developing chronic diseases five years in advance around 75% of the time.2
Additionally, physicians are using artificial intelligence more frequently, according to survey results from the American Medical Association (AMA). Adoption rates of digital health and AI “soared from 38% to 66% over the last year,” and a majority of physicians see an advantage to using AI in their practices.3 The question becomes how transparency is maintained throughout the development process so that physicians feel comfortable using the tools at their disposal. A number of frameworks have been published by various organizations and journals to that effect, attempting to establish footing in a still-forming industry as the technology advances rapidly.
There are serious questions surrounding how much of a miracle cure this technology actually represents. The temptation to integrate it everywhere, in hopes that it will seamlessly solve everything from physician burnout to misdiagnoses, is a strong one, but the reality of where the various projects stand needs to be taken into account.
To cut through the noise on where things stand today and where they could end up, we spoke to Kelly Holmes, MS, AL-CIP, FAPIC, Senior Associate, Infection Prevention & Management Associates, Inc. (IP&MA).
Setting the Stage
Where is predictive surveillance making the biggest measurable impact in infection prevention today?
Holmes: When people talk about AI in infection prevention, they’re usually referring to three areas: predictive models that estimate risk, tools that help detect infections earlier, and decision-support tools that help infection preventionists review charts or apply surveillance definitions. Most predictive models for HAIs are still being studied or piloted rather than used routinely. What we’re seeing right now is more interest in surveillance platforms that help infection preventionists identify potential cases by pulling together structured surveillance data from the medical record.
What specific infection types or risk scenarios are most “AI-ready” right now?
Holmes: In the research literature, infections like C. difficile, surgical site infections, and CLABSIs show up frequently in modeling studies. Those tend to work better because there are clearer risk factors and structured data available in the electronic health record. Infections that rely heavily on interpreting clinical correlation or narrative documentation are much harder to model.
What’s the biggest misconception healthcare leaders have about AI in infection prevention?
Holmes: One of the biggest misconceptions is that predictive surveillance for infections is already widely implemented in hospitals. In reality, most infection prevention programs still rely heavily on retrospective surveillance and manual chart review. The technology is exciting and evolving quickly, but integrating it into real infection prevention workflows is still very much in progress. Developing predictive models is only the first step. The bigger challenge is integrating those models into infection prevention workflows in a way that clinicians trust and can actually utilize.
Identifying At-Risk Patients Earlier
How early can today’s models realistically flag risk compared to traditional surveillance?
Holmes: Across many studies, you tend to see similar variables come up, like device exposure, antibiotic use, severity of illness, comorbidities, and length of stay. Laboratory results and microbiology data also play an important role. That said, the predictive strength of these factors can vary a lot across hospitals, which is why local validation is so important.
What’s the difference between a clinically useful alert and a statistically impressive one?
Holmes: A model can perform very well statistically and still not be useful in practice. If an alert fires too often or doesn’t lead to a clear action, clinicians will start ignoring it. A clinically useful alert has to fit into the workflow and provide information that helps someone make a decision.
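The base-rate effect Holmes describes can be made concrete with a quick calculation. The sketch below is illustrative only; the sensitivity, specificity, and prevalence figures are hypothetical, not drawn from any of the studies discussed here.

```python
# Illustrative only: why a "statistically impressive" model can still
# flood clinicians with false alarms when the target infection is rare.
# All numbers below are hypothetical.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Fraction of fired alerts that are true cases (PPV)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 90% sensitivity and 90% specificity sounds strong,
# but if only 1% of patients develop the infection...
ppv = positive_predictive_value(0.90, 0.90, 0.01)
print(f"PPV: {ppv:.1%}")  # roughly 8% -- most alerts are false alarms
```

At a 1% prevalence, roughly nine out of ten alerts from this hypothetical model would be false positives, which is exactly the alert-fatigue scenario that makes clinicians start ignoring the tool.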
Integrating Lab, EHR, and Environmental Data
What are the biggest integration barriers between lab systems, EHR data, and environmental inputs?
Holmes: One of the biggest challenges is simply bringing all the relevant data together. Infection prevention surveillance often requires pulling information from multiple places in the electronic health record, including (but not limited to) microbiology, imaging, device documentation, procedure codes, and patient movement data. Some of that information is structured, like lab results or device days, but a lot of it is buried in clinical notes that infection preventionists have to review manually. Even within the same EHR platform, workflows and modules can vary across hospital systems, which means tools developed in one environment may not translate easily to another. Data interoperability and workflow variation remain major barriers to implementing predictive tools at scale.
How mature are health systems in pulling these data streams together?
Holmes: Some large academic medical centers with strong informatics teams are starting to pilot these types of systems. But many hospitals, especially smaller or rural facilities, don’t yet have the infrastructure or data science resources to support advanced predictive analytics. That creates a real concern about an equity gap as these technologies evolve.
How do you ensure data integrity before feeding it into predictive models?
Holmes: Before feeding data into predictive models, organizations first need secure data environments and governance processes to protect patient information. After that, the focus shifts to data quality, making sure variables are consistently defined and documented across systems. Infection prevention data can be particularly complex because surveillance often combines structured data with information buried in clinical notes. That’s why clinical validation and ongoing monitoring are essential before models are used operationally.
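The consistency checks Holmes mentions can be as simple as validating each surveillance record before it reaches a model. The field names and rules below are hypothetical, a minimal sketch of what such a check might look like rather than any particular vendor's implementation.

```python
# Minimal sketch of pre-model data-quality checks on surveillance records.
# Field names and rules are hypothetical examples.

def validate_record(rec: dict) -> list[str]:
    """Return a list of data-quality problems found in one record."""
    problems = []
    # Required structured fields must be present.
    for field in ("patient_id", "admit_date", "device_days"):
        if rec.get(field) is None:
            problems.append(f"missing {field}")
    # Cross-field consistency: device days can't be negative or
    # exceed the patient's length of stay.
    dd = rec.get("device_days")
    los = rec.get("length_of_stay")
    if dd is not None and dd < 0:
        problems.append("negative device_days")
    if dd is not None and los is not None and dd > los:
        problems.append("device_days exceeds length_of_stay")
    return problems

print(validate_record({"patient_id": "A1", "admit_date": "2025-01-03",
                       "device_days": 9, "length_of_stay": 7}))
# -> ['device_days exceeds length_of_stay']
```

Records that fail checks like these would be routed back for manual review rather than silently feeding the model, which is one way ongoing monitoring stays operational.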
Reducing Chart Review
How much time are modern AI systems saving IP teams on chart review?
Holmes: There’s a lot of interest in using AI to reduce the amount of manual chart review required for surveillance, but the evidence on time savings is still limited. Many health systems now have surveillance dashboards that organize information from multiple parts of the medical record, such as microbiology results, device documentation, vitals, and flowsheet elements, which can help infection preventionists identify potential cases more quickly.
These tools can save time by making the relevant information easier to find, but they’re not perfect. Algorithms may miss some cases because of the nuances of surveillance definitions, which is why many programs still validate cultures or other key data sources as part of their workflow. Some organizations are also beginning to experiment with enterprise AI tools that help infection preventionists summarize data or generate visualizations for reports and meetings. Tools like that can ultimately free IPs to spend more time on the units working directly with clinical teams.
What tasks still require human review?
Holmes: Infection prevention surveillance often requires interpreting clinical documentation and determining whether cases meet specific reporting definitions. Signs and symptoms may appear in different parts of the record, and IPs need to review the clinical context to determine whether criteria are met. That kind of judgment is difficult to fully automate, so human review remains a critical part of the process.
Does automation improve surveillance accuracy, or just efficiency?
Holmes: Most current systems automate data gathering rather than the surveillance decision itself. Automation can definitely improve efficiency by helping IPs locate relevant data more quickly. Many systems now provide dashboards that pull together structured data like lab results or device exposure, which can be a helpful starting point for surveillance. But infection preventionists still need to review clinical notes, flowsheets, imaging, and various other parts of the medical record to determine whether a case actually meets surveillance definitions.
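The split Holmes describes, automation assembling structured signals into a worklist while the infection preventionist makes the surveillance call, can be sketched as a simple rule. The thresholds and field names below are hypothetical illustrations, not NHSN criteria.

```python
# Sketch of automated case-finding: structured data narrows the list,
# a human applies the surveillance definition. Thresholds and field
# names are hypothetical, not actual NHSN criteria.

def flag_clabsi_candidate(patient: dict) -> bool:
    """Flag a patient for manual CLABSI review -- not a surveillance decision."""
    has_central_line = patient.get("central_line_days", 0) >= 2
    positive_blood_culture = patient.get("blood_culture_positive", False)
    return has_central_line and positive_blood_culture

patients = [
    {"id": "P1", "central_line_days": 4, "blood_culture_positive": True},
    {"id": "P2", "central_line_days": 0, "blood_culture_positive": True},
]
worklist = [p["id"] for p in patients if flag_clabsi_candidate(p)]
print(worklist)  # -> ['P1']
```

A rule like this only improves efficiency: it shortens the list an IP must review, but confirming that a flagged case actually meets the surveillance definition still requires reading the clinical record.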
Real-World Implementation
What governance structure is required before deploying predictive tools?
Holmes: Successful implementation usually requires collaboration between infection prevention teams, clinical leadership, and clinical informatics or IT teams who understand the data infrastructure. Organizations also need clear governance around data security, model validation, and how alerts will fit into clinical workflows.
How important is frontline clinical buy-in?
Holmes: Frontline buy-in is essential. Infection prevention work depends heavily on collaboration with nurses, physicians, and unit leaders, and any tool that adds alerts or changes workflow needs to make sense to the people using it.
What metrics should organizations track to determine whether AI is working?
Holmes: Organizations should look at both operational and quality metrics. For infection prevention teams, that might include things like time spent on chart review, how consistently true cases are identified or missed, and whether alerts lead to meaningful prevention actions. It’s also important to track whether the tool is actually being used and whether it supports existing prevention workflows. The real measure of success is whether the tool helps teams turn data into prevention at the bedside.
Reality Check
What problems in infection prevention does AI NOT solve well today?
Holmes: AI is really starting to help infection prevention teams with things like quickly generating data visualizations, summarizing information for reports, creating educational materials, and supporting surveillance by pulling together structured data such as microbiology results and device tracking. But it is not widely predicting infections or automatically determining whether cases meet surveillance definitions. Infection prevention surveillance still requires clinical interpretation.
Some tools may also help identify patients who appear to be at higher risk earlier in their hospitalization. But identifying risk is only part of the equation. Ultimately, the clinical team has to act on that information. Infection prevention programs already share data about device use and other risk factors, and translating those insights into changes in clinical practice can be challenging. Technology can help surface the risk, but the prevention work still depends on how that information is used at the bedside.
Where is the hype most disconnected from operational reality?
Holmes: A lot of hype assumes that predictive surveillance for infections is already widely implemented in hospitals. In reality, most IP programs still rely heavily on retrospective surveillance and manual review of the electronic chart.
What will likely be possible in 3–5 years that isn’t yet viable?
Holmes: In the next several years, we will likely see more tools that help infection preventionists interpret large amounts of clinical data more efficiently. We could see changes in surveillance definitions that align with more structured criteria that can be extracted directly from the EHR. Advances in natural language processing will likely make it easier to review clinical notes and other unstructured elements of the chart to identify relevant information for surveillance.
We’ll probably also see more predictive models being tested and integrated into existing clinical platforms as vendors work to embed these capabilities within electronic health records and surveillance systems. At the same time, health systems are likely to invest more in AI governance and implementation frameworks to ensure these tools are validated, used safely, and integrated effectively into clinical workflows.
References:
1. Al-Nafjan, Abeer, et al. "Artificial Intelligence in Predictive Healthcare: A Systematic Review." J Clin Med. https://pmc.ncbi.nlm.nih.gov/articles/PMC12525484/
2. MacKenzie, Matt. "NIH-Funded Researchers Develop AI Model to Assess CT Scans." Healthcare Purchasing News. https://www.hpnonline.com/healthcare-it/news/55362081/nih-funded-researchers-develop-ai-model-to-assess-ct-scans
3. Lubell, Jennifer. "Why doctors should be at the heart of AI clinical workflows." AMA. https://www.ama-assn.org/practice-management/digital-health/why-doctors-should-be-heart-ai-clinical-workflows


