Joint Commission, CHAI Provide Framework for AI Use in Clinical Settings

Recommendations relevant to hospitals of all sizes and AI maturity levels.
Oct. 1, 2025

Key Highlights

  • Establish clear AI policies and governance structures tailored to hospital workflows and resources.
  • Form multidisciplinary AI governance groups including clinical, IT, and risk management staff to oversee implementation.
  • Address high-risk areas like operating rooms and sterile processing with strict validation, oversight, and staff training to mitigate critical errors.

The Joint Commission and the Coalition for Health AI (CHAI) last month published their first joint guidance aimed at helping U.S. health systems adopt artificial intelligence (AI) safely, effectively, and ethically. The document, Guidance on Responsible Use of AI in Healthcare (RUAIH), provides high-level recommendations designed to be relevant to hospitals and health systems at all stages of AI adoption.

Key principles outlined in the guidance include establishing clear policies and governance for AI use, validating tools locally before deployment, monitoring AI performance continuously, and integrating these practices into existing organizational workflows based on available resources.

Looking ahead, the Joint Commission and CHAI plan to release governance playbooks in 2025 and 2026, offering practical implementation strategies informed by workshops and feedback from hospitals of varying sizes and capabilities. A voluntary AI certification program is also in the works for the Joint Commission’s more than 22,000 accredited organizations.

“We understand how quickly AI is changing healthcare and at a scale I’ve never seen in my time as a leader,” said Dr. Jonathan Perlin, president and CEO of the Joint Commission. “From the moment we announced our partnership with CHAI, we knew we wanted to reflect that fast-paced dynamic while still delivering thoughtful, streamlined guidance for healthcare organizations to self-govern with AI.”

The new guidance has direct implications for high-risk hospital areas such as operating rooms (ORs) and sterile processing departments (SPDs). In SPDs, AI tools could predict sterilization failures, optimize instrument cycles, or flag maintenance needs, all requiring strict validation and ongoing oversight. In ORs, AI may support case scheduling, forecast instrument use, predict equipment wear, or assist surgeons in planning instrument sets and anticipating complications.

Experts caution that mistakes in these environments can be critical. Questions of liability, including whether the hospital, the vendor, or the clinician is responsible when an AI-assisted decision goes wrong, remain unresolved. Staff acceptance is another hurdle: some may resist or mistrust AI tools, underscoring the need for structured change management.

To prepare, OR and SPD leaders are encouraged to form AI governance groups with representation from clinical, IT, quality, and risk management teams. These groups should take stock of existing inventory and potential AI tools, establish local validation frameworks, define fallback and override procedures, implement audit and monitoring systems, train staff, and integrate AI review into risk management processes. Participation in feedback loops with the Joint Commission and CHAI will also be critical as the guidance evolves.

The Joint Commission’s guidance represents a proactive effort to help hospitals navigate the rapidly evolving AI landscape while prioritizing patient safety, regulatory compliance, and operational effectiveness.

About the Author

Daniel Beaird

Editor-in-Chief

Daniel Beaird is Editor-in-Chief for Healthcare Purchasing News.
