By Steven Hughes
Advances in Artificial Intelligence (AI) and Machine Learning (ML) have drastically improved the speed, scale and automation of many tasks, not only in medical devices but across many areas of technology. In medical devices, AI/ML has been in use for years in diagnosing arrhythmias with algorithms built on known data sets, in EKG analysis and medical transcription, and in providing initial readings and findings in cardiology, dermatology, laboratory medicine, oncology, pathology and radiology, all of which require confirmation of the analysis by a clinician. These solutions are currently hard coded, but newer iterations will soon reach the market that work on updated data sets, further automate processes and may even adapt and learn from real-world use, evolving over time beyond their original intended use. To help ensure the integrity of the analysis and the data used, several valuable guidelines have recently been made available.
FDA
Currently, any AI/ML-based software that is intended to treat, diagnose, cure, or prevent disease or other conditions is considered “Software as a Medical Device” (SaMD) under the Federal Food, Drug, and Cosmetic Act. The U.S. Food and Drug Administration (FDA) has authority to regulate SaMDs and has implemented several policies intended to guide the industry in safe practices for the development and use of SaMDs in the U.S.
The FDA has also recently issued guidance specific to AI/ML in medical devices. The FDA, in cooperation with Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency, released “Guiding Principles” for the use of AI/ML in medical devices on Oct. 27, 2021. The Guiding Principles are not regulations but guidelines for the safe and effective use of AI/ML in medical devices, and they provide a starting point for the health care industry to work together to promote Good Machine Learning Practice. To date, the FDA has cleared ~700 AI/ML-based SaMDs, with most devices in Radiology (77%), Cardiology (10%), Neurology (3%) and Hematology (2%), and more are being added every month (see QR code 1).
In January 2021, the FDA released the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (see QR code 2), which includes a framework for an “Algorithm Change Protocol” to implement changes in a controlled manner that manages risks posed to patients. The plan focuses on SaMD but is also relevant to Software in a Medical Device (SiMD). AI/ML functionality and development have exponentially increased in recent months because of the development of large language models (LLMs). LLMs are AI models trained on very large datasets, enabling them to sift, recognize, summarize, translate, predict and generate content. Some familiar LLM platforms are ChatGPT, Claude, Llama and PaLM. Currently, no device that uses generative AI or artificial general intelligence (AGI), or that is powered by an LLM, has been approved.
EXECUTIVE ORDER
On Oct. 30, 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (see QR code 3), establishing the first set of standards for using AI in health care and other industries. Just as advances in technology and use of the Internet dramatically changed the landscape of health care, the use of AI and ML is destined to do the same. The executive order (EO) seeks to balance managing the potential risks of AI with encouraging innovation that can benefit consumers, and its many directives depend heavily on the agencies and companies that have been called on to assist with their development and rollout.
The EO defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The scope of the EO is not limited to generative AI or ML technologies; instead, it has the potential to affect any machine-based system that can make decisions, recommendations or predictions based on data. The EO outlines principles that agencies must follow when designing, developing, acquiring, securing and using AI in the federal government.
HSCC
The Health Sector Coordinating Council (HSCC) published a 35-page white paper, “Health Industry Cybersecurity-Artificial Intelligence Machine Learning (HIC-AIM),” which provides an overview and discussion of nine specific cybersecurity considerations for the implementation of AI/ML (see QR code 4). It covers cybersecurity signal detection, continuous monitoring, evaluation, patching and disposition, along with considerations for implementing an AI/ML system in a health care environment: ensuring the system provides Explainable Artificial Intelligence (XAI), so that its developers and users understand how it works, and validating the quality of its data to confirm correct results.
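To make the data-validation consideration concrete, the sketch below shows one minimal form it can take: screening an input record for missing, non-numeric or implausible values before it ever reaches a model. The field names and plausibility ranges here are illustrative assumptions, not values taken from the HIC-AIM white paper.

```python
# Minimal input-quality gate for a hypothetical vitals record, run before
# the record is passed to an AI/ML model. Field names and plausible ranges
# are assumptions for illustration only.

PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 300),
    "spo2_pct": (50, 100),
    "age_years": (0, 120),
}

def validate_record(record):
    """Return a list of problems; an empty list means the record passed."""
    problems = []
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif isinstance(value, bool) or not isinstance(value, (int, float)):
            problems.append(f"{field}: non-numeric value {value!r}")
        elif not lo <= value <= hi:
            problems.append(f"{field}: {value} outside plausible range [{lo}, {hi}]")
    return problems

good = {"heart_rate_bpm": 72, "spo2_pct": 98, "age_years": 54}
bad = {"heart_rate_bpm": 7200, "spo2_pct": None, "age_years": 54}

print(validate_record(good))  # [] -- record passes
print(validate_record(bad))   # two problems: out-of-range heart rate, missing SpO2
```

In a production system this kind of gate would be one small piece of a larger data-quality program (provenance checks, drift monitoring, audit logging), but the principle is the same: reject or flag suspect inputs before they influence a clinical result.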
GOVERNING AI
There are also several privacy- and security-related concerns to consider under Health Insurance Portability and Accountability Act (HIPAA) regulations; the U.S. Federal Trade Commission (FTC) and Department of Health and Human Services (HHS) health breach notification regulations; the EU’s General Data Protection Regulation (GDPR); and various state laws and regulations that require heightened privacy and security mechanisms for certain classes of information, such as genetic information.
The FDA oversees AI/ML in health care, but there are additional laws, regulations and guidance to be considered, and any change to a product’s intended use can set off a chain reaction. When a variation of a software product is used for diagnosis or treatment, reimbursement will also need to be reviewed by a Medicare Administrative Contractor (MAC). If a new product is “similar” to another product already on the market, an analysis of Current Procedural Terminology (CPT), Healthcare Common Procedure Coding System (HCPCS) and International Classification of Diseases (ICD-10) codes will identify potential reimbursement, and an analysis of Local and National Coverage Determinations will need to be performed. Medical device manufacturers (MDMs) must consider the totality of these laws and regulations for their SaMD and build their software with the relevant privacy and security protections.
The NIST Generative AI (GAI) Public Working Group was launched in June 2023 to help address the opportunities and challenges associated with AI that can generate content such as code, text, images, videos and music. The working group will also assist NIST in developing key guidance to help organizations address the special risks associated with generative AI technologies, which have the potential to revolutionize many industries and society. NIST coordinates this work with the National Artificial Intelligence Advisory Committee; more information on its work can be found at https://ai.gov/naiac/.
SECURITY CONCERNS & DATA POISONING
The promise of AI/ML also introduces vulnerability to intentional attacks, including data poisoning, model replication, evasion and exploitation of traditional software flaws, that are designed to alter, deceive, manipulate or compromise these systems and render them ineffective. Organizations adopting AI/ML systems need to be aware of these potential vulnerabilities before implementation. AI/ML also presents other challenges, including potential bias introduced by the algorithms and the very large data sets used to train them, and the difficulty of evaluating the safety of AI-driven products and services while maintaining the privacy and security of the data involved. These risks must be continuously evaluated and assessed to ensure the safety and integrity of AI/ML-driven products and services.
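To illustrate why data poisoning is dangerous, the toy sketch below uses entirely hypothetical data and a deliberately simple nearest-centroid classifier: flipping the labels of just two training points is enough to change the model's prediction for the same input. Real attacks on large models are far subtler, but the mechanism is the same.

```python
# Toy illustration of label-flip data poisoning (hypothetical data).
# A nearest-centroid classifier is trained twice: once on clean labels,
# once after an attacker flips two labels near the class boundary.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label) -> dict of per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: class 0 clusters near (0, 0), class 1 near (4, 4).
clean = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((4, 4), 1), ((3, 4), 1), ((4, 3), 1)]

# The attacker flips the labels of two boundary-adjacent class-0 points.
poisoned = [(x, 1 - y) if x in {(1, 0), (0, 1)} else (x, y) for x, y in clean]

query = (1.5, 1.5)  # a new sample on the class-0 side of the boundary
print(predict(train(clean), query))     # 0 -- correct class
print(predict(train(poisoned), query))  # 1 -- prediction flipped by poisoning
```

The poisoned model still "trains" without any error or warning, which is exactly the point: without integrity controls on training data, the compromise is silent.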
SaMD used for diagnosis and treatment may also require the access, storage and use of personally identifiable data, and the applicable requirements depend on the organization creating the data, the country in which the data is collected, how that data is regulated, and which U.S. federal and state laws and international laws and regulations come into play. It is imperative that medical device manufacturers and software developers consider the totality of these laws and regulations for their SaMD, build AI/ML software with relevant privacy and security protections, and be transparent with their customers.
FUTURE USE OF AI
AI/ML promises to efficiently assist clinicians in their day-to-day activities: compiling, reviewing and analyzing data gathered on their patients, providing workflow analysis, and making educated inferences as to what that data means for the patient and thus how to treat the patient or cure the disease. The applications of AI/ML in health care technology are potentially endless, but with this potential there is also risk. Managing this risk involves a full understanding and proper implementation of the broad array of laws, regulations and guidelines that govern these new technologies, so that health care can be delivered safely, securely, equitably and efficiently, with full transparency to end users and patients. AI/ML has the potential to access multiple sources of compiled data to reveal known patterns in disease and provide a proper assessment and treatment; to predict an individual’s risk of future diseases and provide a preventive care plan; and to analyze and optimize workflows, maintaining safety, privacy and security while reducing costs, wait times and treatment turnaround times and improving overall health care delivery. The advancement of AI/ML in health care is inevitable, and it is our responsibility to shape its future and proper use.


