
UCSF to build novel continuous AI monitoring platform


UCSF Health and the UCSF Division of Clinical Informatics and Digital Transformation, with a $5 million gift from Ken and Kathy Hao, aim to ensure the efficacy and safety of artificial intelligence used in clinical care by building a platform that continuously reports on whether a tool is achieving its intended results or needs improvement.

The researchers said the platform will flag if a tool is potentially dangerous or risks worsening health disparities, prompting immediate action when necessary.

WHY IT MATTERS

Called the Impact Monitoring Platform for AI in Clinical Care, or IMPACC, the platform will be designed by UCSF Health and the DoC-IT to report on the effectiveness of artificial intelligence in clinical decision-making and patient care, as well as potential risks to patient health and the exacerbation of health disparities.

Once IMPACC is developed, UCSF Health will test it with a set of AI tools currently used in clinical care, according to an announcement from the researchers.

Julia Adler-Milstein, head of the UCSF DoC-IT, and Dr. Sara Murray, chief health AI officer at UCSF Health, will lead the collaborative effort to improve patient care at UCSF while advancing the science of how to assess AI tools in real-world use, they said.

“By building IMPACC, we will take a major leap forward in how we analyze AI’s performance in healthcare,” Murray said in a statement. 

“As we deploy new AI technologies, this novel, scalable platform will provide our health system with direct and actionable insights into ongoing performance, ensuring not only the effectiveness of these new tools but also safety across the system and benefit for patients.”

The researchers said the platform will also be used to guide healthcare leaders in their decisions to scale or stop the use of certain AI tools. We reached out to UCSF to ask if IMPACC, once tested and ready for use, would be available to other health systems. We will update the story with any response.

THE LARGER TREND

Healthcare lacks established protocols for ongoing AI monitoring, the researchers pointed out. Health systems therefore need a way to identify issues in AI tools' real-world performance in real time, with longitudinal monitoring and specified criteria for escalation and human intervention.
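To illustrate the kind of check such continuous monitoring could involve, here is a minimal sketch in Python of longitudinal performance tracking with an escalation threshold. The metric, threshold and function names are illustrative assumptions for this article, not details of how IMPACC is actually built.

```python
# Hypothetical sketch: compare an AI tool's accuracy over a recent window
# of cases against its baseline, and flag the window for human review if
# performance drops too far. Thresholds and names are illustrative only.
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class MonitoringResult:
    window_accuracy: float
    baseline_accuracy: float
    escalate: bool


def monitor_window(
    predictions: List[int],
    outcomes: List[int],
    baseline_accuracy: float,
    max_drop: float = 0.05,
) -> MonitoringResult:
    """Compare accuracy over a recent window against the tool's baseline.

    If accuracy falls more than `max_drop` below baseline, the result is
    flagged for escalation and human intervention.
    """
    correct = [int(p == o) for p, o in zip(predictions, outcomes)]
    window_accuracy = mean(correct) if correct else 0.0
    escalate = (baseline_accuracy - window_accuracy) > max_drop
    return MonitoringResult(window_accuracy, baseline_accuracy, escalate)


if __name__ == "__main__":
    # Example: baseline accuracy is 0.90; this window dips to 0.80.
    result = monitor_window(
        predictions=[1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
        outcomes=[1, 0, 1, 0, 0, 1, 1, 1, 1, 1],
        baseline_accuracy=0.90,
    )
    if result.escalate:
        print(f"Escalate for review: accuracy {result.window_accuracy:.2f} "
              f"vs baseline {result.baseline_accuracy:.2f}")
```

In practice a platform like the one UCSF describes would track many more signals, such as safety events and disparities across patient subgroups, but the same pattern of comparing live performance against expectations and escalating to humans applies.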

The risk that healthcare AI gone awry could cause undetected adverse outcomes for patients and healthcare providers is a concern for healthcare organizations, doctors, patients, lawmakers and many others.

“There are plenty of justifiable worries about what looks to be a new normal built around this powerful and fast-changing technology,” Dr. Sonya Makhni, medical director of the Mayo Clinic Platform and senior associate consultant for the Department of Hospital Internal Medicine, told Healthcare IT News earlier this year.

While the Mayo Clinic Platform has developed a risk classification system to qualify AI before it’s used, health systems employing AI algorithms “should use the AI development life cycle as a framework to understand where bias may potentially be introduced,” she advised healthcare leaders thinking through the safe use of AI.

“It is the responsibility of both the solution developers and the end-users to frame an AI solution in terms of risk to the best of their abilities.” 

ON THE RECORD

“This philanthropic gift is transformative in many ways,” Adler-Milstein said in a statement. “It comes at a critical juncture as the healthcare industry more broadly integrates AI into clinical practice.” 

“This is the first partnership between UCSF and UCSF Health on AI monitoring,” Suresh Gunasekaran, UCSF Health president and chief executive officer, added.

“Together, we are uniquely positioned to create the first effective model platform across health systems in the United States that will offer real-time visibility into AI tool performance and clinical impact.”

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.


