By AAMI
FDA Debuts Plans for Artificial Intelligence-Based Medical Software
On January 12, the U.S. Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The action plan describes a “multipronged approach to advance the agency’s oversight of AI/ML-based medical software.”
AI/ML technology has “the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day,” but appropriate regulatory oversight is needed so that AI/ML-based software “delivers safe and effective functionality,” the document reads.
“This action plan outlines the FDA’s next steps towards furthering oversight for AI/ML-based SaMD,” said Bakul Patel, director of the Digital Health Center of Excellence in the Center for Devices and Radiological Health (CDRH). “The plan outlines a holistic approach based on total product life cycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive.”
The FDA also expressed an expectation for transparency and real-world performance monitoring that could enable evaluation of a software product from premarket development through postmarket performance.
The action plan, which Patel said is expected to evolve over time, was developed in direct response to feedback from a 2019 FDA discussion paper that provided a proposed regulatory framework for AI/ML-based SaMD.
As part of the action plan, the FDA is having liaisons participate in the ongoing standardization efforts of the AAMI AI committee. The committee is currently collaborating with BSI to create new risk management standards for AI/ML use in medical devices.
“Outlining good practices specifically for the risk management of artificial intelligence is important because data-driven systems can reach conclusions that subvert human expectations,” said Emily Hoefer, senior manager of shared services at AAMI. “The FDA’s participation in developing this guidance helps the AI/ML community as a whole ensure patient safety even while staying in compliance with accrediting bodies.”
The development of guidance on the application of risk management for AI/ML is a result of one of the seven recommendations made in the 2020 AAMI and BSI white paper, Machine Learning AI in Medical Devices: Adapting Regulatory Frameworks and Standards to Ensure Safety and Performance. Further AI/ML guidance documents are being developed by the AAMI and BSI collaborative based on the recommendations in the white paper, such as establishing a new Good Machine Learning Practice (GMLP), an important aspect of the FDA’s action plan.
The FDA action plan includes five actions and goals in total:
- Updating the proposed framework for modifications to AI/ML-based SaMD through a draft guidance that incorporates stakeholder feedback received by the FDA following the 2019 discussion paper and request for feedback.
- Encouraging the development of good machine learning practice (GMLP) and its harmonization, along with facilitating oversight through manufacturers’ adherence to GMLP.
- Developing a patient-centered approach incorporating transparency for users and increased attention to how AI/ML-based technologies interact with people, to include users and patients more broadly. The agency intends to hold a public workshop on how device labeling supports transparency and enhances user trust.
- Supporting regulatory science methods related to algorithm bias and robustness, including the identification and elimination of known biases related to race, ethnicity, and socioeconomic status. The work will be done at FDA’s Centers for Excellence in Regulatory Science and Innovation (CERSI).
- Clarifying real-world performance (RWP) data monitoring for AI/ML-based software and adopting a total product life cycle (TPLC) approach to AI/ML-based SaMD.
AAMI Addressing AI Risk Management
The AAMI AI committee is now collaborating with BSI and representatives from the FDA to address the development of AI risk management guidance. According to Joe Lewelling, senior advisor on content and strategy at AAMI, this was a particularly high priority for the groups, because it directly impacts the safety of users and patients.
“The uniqueness of AI creates a different risk profile than your average medical device – one where some risks may be hard to quantify or even unknown,” he said. “The guidance will help rectify that problem without reinventing the wheel.”
So that the new AI guidance can be adopted quickly, the AAMI AI committee is borrowing heavily from familiar medical device risk management practices. In particular, the document will serve as guidance for applying the internationally used device standard ANSI/AAMI/ISO 14971:2019 to AI technologies. The new AI guidance will be available for public comment in early spring 2021.
Collecting the AI Stories of Today
AAMI is taking steps to better understand how artificial intelligence and machine learning are being used in the health technology space right now. In collaboration with the American College of Clinical Engineering and the Healthcare Information and Management Systems Society, AAMI recently collected statistics and qualitative data regarding AI use in hospitals.
“We are conducting research to better understand how health delivery organizations and hospitals are currently utilizing AI to improve health care outcomes and safety,” said Danielle McGeary, vice president of HTM at AAMI. “We were encouraged to find many organizations willing to share their AI story.”
Earlier this year, representatives from the joint organization – dubbed the Health Technology Alliance (HTA) – hosted short, remote interviews with professionals who have utilized AI for:
- Monitoring Equipment Utilization
- Predictive Maintenance and Other Monitoring
- Alarm Management
- Bed Management
- Clinical Decision Support Systems (i.e., advanced analytics on large clinical data sets)
- Other unique applications
Results from these interviews will be used to determine future HTA educational and resource offerings, and potentially as part of a research presentation at the upcoming International Clinical Engineering and Healthcare Technology Management Conference in September 2021.