AI Model for Predicting Sepsis Shows Little Benefit Over Clinician Judgment, Study Finds
New research shows that proprietary artificial intelligence (AI) software, widely used in the United States to determine which patients are at risk for sepsis, fails to identify high-risk patients before they receive treatment, offering clinicians little practical help.
The tool is called the Epic Sepsis Model, and it is “part of Epic’s electronic medical record software, which serves 54% of patients in the United States and 2.5% of patients internationally.” It automatically generates “sepsis risk estimates in the records of hospitalized patients every 20 minutes, which clinicians hope can allow them to detect when a patient might get sepsis before things go bad.” The need for an effective way to distinguish between high- and low-risk sepsis patients is pressing. As study co-author Tom Valley emphasizes, it “can be really hard to know who can be sent home with some antibiotics and who might need to stay in the intensive care unit” since “sepsis has all these vague symptoms.”
Unfortunately, at this point, the AI does not “seem to be getting more out of patient data than clinicians are.” There was a “mismatch in the timing between when information [became] available to the AI and when it [was] most relevant to clinicians.” Jenna Wiens, the corresponding author of the study, theorized that the health data the model relied on “encodes, perhaps unintentionally, clinician suspicion” into its decision making.
To measure the AI’s performance, the research team “calculated the probability that the AI assigned higher risk scores to patients who were diagnosed with sepsis, compared to patients who were never diagnosed with sepsis.” This showed that the AI was correct only “62% of the time when using patient data recorded before the patient met criteria for having sepsis. Perhaps most telling, the model only assigned higher risk scores to 53% of patients who got sepsis when predictions were restricted to before a blood culture had been ordered.” Patients typically do not receive blood culture tests “until they start presenting sepsis symptoms.”
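The metric the researchers describe, the probability that a patient who developed sepsis received a higher risk score than a patient who never did, is the standard area under the ROC curve (AUROC), where 50% is no better than chance. The sketch below is not the study’s code; the risk scores are hypothetical values used only to illustrate how such a probability is computed.

```python
def auroc(pos_scores, neg_scores):
    """Probability that a randomly chosen positive (sepsis) patient
    outscores a randomly chosen negative (no-sepsis) patient.
    Ties count as half a win; 0.5 means the model is no better than chance."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores, purely for illustration:
sepsis_scores = [0.8, 0.6, 0.4, 0.3]     # patients later diagnosed with sepsis
no_sepsis_scores = [0.7, 0.5, 0.2, 0.1]  # patients never diagnosed

print(auroc(sepsis_scores, no_sepsis_scores))  # 0.6875
```

With these made-up numbers the model outscores non-sepsis patients in 11 of 16 pairings (about 69%), in the same vicinity as the 62% the study reports for the Epic Sepsis Model before diagnosis criteria were met.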
The researchers concluded that the “model was cueing in on whether patients received diagnostic tests or treatments when making predictions. At that point, clinicians already suspect that their patients have sepsis, so the AI predictions are unlikely to make a difference.”
The full story is available on the University of Michigan’s website.
Matt MacKenzie | Associate Editor
Matt is Associate Editor for Healthcare Purchasing News.