A new framework for ensuring AI systems used in healthcare are “developed and deployed ethically, transparently, and with patient equity at the forefront” has been published in the Journal of Medical Internet Research (JMIR).
The framework, called the “Scalable Agile Framework for Execution in AI (SAFE-AI),” provides “practical guidance for small and medium-sized enterprises building medical AI technologies. It integrates ethical checkpoints directly into standard development workflows, helping organizations proactively identify and mitigate potential biases before they affect patient care.”
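To give a concrete sense of what an ethical checkpoint embedded in a development workflow might look like, here is a minimal sketch in Python. It is not taken from the paper: the `bias_checkpoint` function, the per-group sensitivity values, and the 0.05 tolerance are all hypothetical, and a real deployment would use the metrics and thresholds the team defines for its own patient populations.

```python
from dataclasses import dataclass


@dataclass
class CheckpointResult:
    passed: bool
    detail: str


def bias_checkpoint(metrics_by_group: dict[str, float],
                    max_gap: float = 0.05) -> CheckpointResult:
    """Fail the checkpoint if model performance differs too much across patient groups.

    `metrics_by_group` maps a demographic group to a validation metric
    (e.g. sensitivity); `max_gap` is the largest acceptable difference.
    """
    gap = max(metrics_by_group.values()) - min(metrics_by_group.values())
    if gap > max_gap:
        return CheckpointResult(False, f"performance gap {gap:.3f} exceeds {max_gap}")
    return CheckpointResult(True, f"performance gap {gap:.3f} within tolerance")


if __name__ == "__main__":
    # Hypothetical per-group sensitivities from a validation set.
    result = bias_checkpoint({"group_a": 0.91, "group_b": 0.84})
    print(result)  # passed=False: the 0.070 gap exceeds the 0.05 tolerance
```

A check like this could run automatically before each release, so a disparity is flagged and reviewed before the model reaches patient care rather than after.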
Warren Pettine, senior author of the publication, wrote that AI is “increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations.” The roadmap will help keep those systems fair, transparent, and monitored.
The paper itself states that SAFE-AI “simplifies when and how ethical review is triggered and documented, making responsible AI practices feasible even in environments with limited ethics, governance, or compliance resources. SAFE-AI assumes the presence of qualified data scientists and engineers, and it does not replace the need for statistical or technical expertise but instead provides a lightweight structure for coordinating and documenting work that those experts already perform.”
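As a rough illustration of what a “lightweight structure for coordinating and documenting” review triggers could look like in code, the sketch below records a development change and whether it would trigger an ethical review. The trigger list, the `ReviewRecord` fields, and the `log_change` helper are assumptions made for illustration; the paper defines its own triggers and documentation requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical change types that would trigger an ethical review under a
# simple governance rule; the actual SAFE-AI triggers are defined in the paper.
REVIEW_TRIGGERS = {"new_data_source", "new_patient_population", "threshold_change"}


@dataclass
class ReviewRecord:
    change_type: str
    description: str
    review_required: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_change(change_type: str, description: str) -> ReviewRecord:
    """Record a development change and whether it triggers ethical review."""
    return ReviewRecord(change_type, description,
                        review_required=change_type in REVIEW_TRIGGERS)


if __name__ == "__main__":
    record = log_change("threshold_change", "Lowered crisis-triage alert threshold")
    print(record.review_required)  # True: this change would be routed to review
```

The point of such a record is documentation rather than enforcement: the engineers already doing the work simply leave a trail showing when review was required and why.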