A new study showcases promising avenues for the use of AI in sepsis research. CIDRAP has the news.
The study found that an LLM “was able to extract presenting signs and symptoms of sepsis from the admission notes of more than 93,000 patients with accuracy that was equal to that of physicians performing a manual medical review. The LLM also identified symptom-based syndromes that correlated with infection sources, risk for antibiotic-resistant organisms, and in-hospital mortality.”
Most studies “assessing the association between antibiotic choice, timing, and outcomes [for sepsis] haven’t used signs and symptoms as variables because extracting that information requires ‘laborious and subjective medical reviews.’” However, the authors of this study used a proprietary LLM to “extract up to 10 presenting signs and symptoms from the history-and-physical admission notes of 104,248 patients with possible infection” and then validated the labels by comparing the results with “a manual review of a random sample of 303 admission notes by an infectious disease physician.”
The LLM achieved an accuracy of 99.3%, “balanced accuracy of 84.6%, positive predictive value of 68.4%, sensitivity of 69.7%, and specificity of 99.6% compared with the physician medical record reviewer.” Analysis of the “10 most common sepsis signs and symptoms identified by the LLM produced seven syndromes corresponding to four sites of infection (skin and other soft tissue, cardiopulmonary, gastrointestinal, and urinary tract) that were directly correlated with ICD-10-CM discharge diagnosis codes that corresponded to infections at those sites.”
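The mix of near-perfect accuracy with much lower positive predictive value and sensitivity is what you'd expect when the labels being extracted are rare. As a minimal sketch, here is how those metrics relate to a confusion matrix; the counts below are illustrative assumptions (the article does not report the underlying counts), chosen only to show how a rare label yields this pattern:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)                  # true-positive rate (recall)
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    balanced_accuracy = (sensitivity + specificity) / 2
    ppv = tp / (tp + fp)                          # positive predictive value (precision)
    return {
        "accuracy": accuracy,
        "balanced_accuracy": balanced_accuracy,
        "ppv": ppv,
        "sensitivity": sensitivity,
        "specificity": specificity,
    }

# Hypothetical counts, NOT from the study: when true negatives dominate
# (most notes lack any given symptom), overall accuracy is near-perfect
# even though PPV and sensitivity sit far lower.
m = classification_metrics(tp=23, fp=11, tn=2950, fn=10)
```

With counts like these, accuracy lands near 99% while sensitivity and PPV stay below 70%, which is why the authors report balanced accuracy alongside raw accuracy.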
A commentary published in the same journal as the study seeks to temper expectations, however. The authors say that at this point the LLM is probably “better suited to automating simple tasks, such as the extraction of signs and symptoms, than participating in clinical decision-making.” Still, this tool could prove useful for clinicians.