
Blog: The Promise Artificial Intelligence Holds for Improving Health Care


[Image: AI Lifecycle]

By: Troy Tazbaz, Director, Digital Health Center of Excellence (DHCoE), Center for Devices and Radiological Health, U.S. Food and Drug Administration

Artificial intelligence (AI) is rapidly changing the health care industry and holds transformative potential. AI could significantly improve patient care, increase medical professionals' satisfaction, and accelerate research in medical device development and drug discovery.

AI also has the potential to drive operational efficiency, enable personalized treatments, and streamline health care processes.

At the FDA, we know that appropriate integration of AI across the health care ecosystem will be paramount to achieving its potential while reducing its risks and challenges. The FDA is coordinating across its medical product centers on the development and use of AI, as we noted in the paper Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together.

The DHCoE wants to foster responsible AI innovation in health care while ensuring that these technologies, when intended for use as medical devices, are safe and effective for end users, including patients. We also seek to build a collaborative approach and alignment across the health care ecosystem around AI.

AI Development Lifecycle Framework Can Reduce Risk

There are several ways to achieve this. First, agreeing on and adopting sector-level standards and best practices for the AI development lifecycle, as well as risk management frameworks, can help address the risks associated with each phase of an AI model's lifecycle.

This includes, for instance, approaches to ensure that data suitability, collection, and quality match the intent and risk profile of the AI model being trained. Doing so could significantly reduce the risks these models pose and help ensure they provide appropriate, accurate, and beneficial recommendations.

Additionally, the health care community could agree on common methodologies for giving a diverse range of end users, including patients, information on how a model was trained, deployed, and managed through robust monitoring tools and operational discipline. Clear communication of a model's reasoning will help build the trust and assurance that people and organizations need to adopt AI successfully.

Enabling Quality Assurance of AI in Health Care

To positively impact clinical outcomes with AI models that are accurate, reliable, ethical, and equitable, developing a quality assurance practice for these models should be a priority.

Top of mind for device safety is quality assurance applied across the lifecycle of a model's development and use in health care. Continuous performance monitoring before, during, and after deployment is one way to accomplish this, as is identifying data quality and performance issues before a model's performance becomes unsatisfactory.
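To make the idea of continuous performance monitoring concrete, here is a minimal sketch of what such a check might look like in code. It assumes a supervised model whose predictions can later be compared against confirmed outcomes, and the class name, window size, and alert threshold are illustrative choices only, not an FDA-endorsed method or requirement.

```python
from collections import deque


class ModelPerformanceMonitor:
    """Minimal sketch of continuous post-deployment monitoring.

    Tracks a rolling accuracy-style metric over the most recent
    predictions and flags degradation before performance becomes
    unsatisfactory. Window size and threshold are illustrative
    assumptions, not prescribed values.
    """

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.90):
        self.window = deque(maxlen=window_size)  # most recent correct/incorrect outcomes
        self.alert_threshold = alert_threshold   # minimum acceptable rolling accuracy

    def record(self, prediction, ground_truth) -> None:
        """Record whether a deployed model's prediction matched the confirmed outcome."""
        self.window.append(prediction == ground_truth)

    def rolling_accuracy(self) -> float:
        """Fraction of correct predictions in the current window."""
        return sum(self.window) / len(self.window) if self.window else 1.0

    def check(self) -> bool:
        """Return True (and alert) once a full window falls below the threshold."""
        accuracy = self.rolling_accuracy()
        if len(self.window) == self.window.maxlen and accuracy < self.alert_threshold:
            # In practice this would trigger review by the deploying organization.
            print(f"ALERT: rolling accuracy {accuracy:.3f} below {self.alert_threshold}")
            return True
        return False
```

In a real deployment, the monitored metric, the alerting mechanism, and the review process would be chosen to match the model's intended use and risk profile, consistent with the lifecycle approach described above.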

How can we go about achieving our shared goals of assurance, quality, and safety? At the FDA, we've discussed several concepts to help promote this process for the use of AI in medical devices. We plan to expand on these concepts in future publications like this one, including the following topics:

  • Standards, best practices, and operational tools
  • Quality assurance laboratories
  • Transparency and accountability
  • Risk management for AI models in health care

Generally, standards, best practices, and operational tools can support responsible AI development and provide clinicians, patients, and other end users with quality assurance for the products they need.

Principles such as transparency and accountability can help stakeholders feel comfortable with AI technologies. Quality assurance and risk management, right-sized for health care institutions of all sizes, can help provide confidence that AI models are developed, tested, and evaluated on data that is representative of the population for which they are intended.

Shared Responsibility for AI Quality Assurance Is Essential to Success

Efforts around AI quality assurance have sprung up at the grassroots level across the U.S. and are starting to bear fruit. Solution developers, health care organizations, and the U.S. federal government are all working to explore and develop best practices for quality assurance of AI in health care settings.

These efforts, combined with FDA activities relating to AI-enabled devices, may lead to a world in which AI in health care settings is safe, clinically useful, and aligned with patient safety and improvement in clinical outcomes.

Our "virtual" doors at the DHCoE are always open, and we welcome your comments and feedback related to AI use in health care. Email us at digitalhealth@fda.hhs.gov, noting "Attn: AI in health care" in the subject line.
