AMA Adopts New Policy Encouraging Transparency Around How AI Tools Arrive at Their Conclusions
The American Medical Association (AMA) has adopted a new policy aimed at “maximizing trust in and increasing transparency around how [augmented intelligence] tools arrive at their conclusions.” The full press release is available on the AMA’s website.
The new policy specifically calls for “explainable clinical AI tools that include safety and efficacy data. To be considered explainable, these tools should provide explanations behind their outputs that physicians, and other qualified humans, can access to interpret and act on when deciding on the best possible care for their patients.”
The policy also calls for “requiring an independent third party, such as regulatory agencies or medical societies, to determine whether an algorithm is explainable, rather than relying on claims made by its developer.” It further states that explainability “should not be used as a substitute for other means of establishing safety and efficacy of AI tools, such as randomized clinical trials,” and calls on the AMA to “collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.”
The AMA Council on Science and Public Health report that served as the basis for this policy noted that “when clinical AI algorithms are not explainable, the clinician’s training and expertise is removed from decision-making, and they are presented with information they may feel compelled to act upon without knowing where it came from or being able to assess accuracy of the conclusion.”

Matt MacKenzie | Associate Editor, Healthcare Purchasing News