
By Melissa Lyder
Nothing captures today's media attention quite like artificial intelligence (AI) and its myriad applications. In medical imaging, emerging AI capabilities in medical devices and software promise real productivity improvements. But integrating these features into imaging systems, and into the clinical environment at large, also carries unique risks. So what does this shift mean for the future of medical imaging? And what does a tech-savvy health IT or healthcare technology management (HTM) leader need to know when discussing adoption of this technology with clinicians?
For medical imaging, AI integration can mean rapid reading and analysis of medical images. AI features of devices currently on the market take a patient's scan and run it through a deep learning algorithm, typically a multilayer convolutional neural network. These algorithms extract characteristic features that define anatomical structures and potential abnormalities, with each feature extracted at a different layer of the network. Based on these features, the system then classifies the image as affected or unaffected by the specified condition. Alongside this classification, many systems provide heat maps of affected areas and a confidence score for the result.
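The pipeline described above can be sketched in miniature. This is a toy illustration only, with hand-set kernels standing in for learned weights and a tiny array standing in for a scan; a real system would learn its filters from labelled images.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy "scan": an 8x8 image containing a bright blob (a stand-in abnormality)
scan = np.zeros((8, 8))
scan[3:5, 3:5] = 1.0

# Two stacked layers: in a trained network these weights would be learned,
# not hand-set as they are here
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
blob_kernel = np.ones((2, 2)) / 4.0

features1 = relu(conv2d(scan, edge_kernel))       # low-level features
features2 = relu(conv2d(features1, blob_kernel))  # higher-level features

# "Heat map": the final activation map, normalized, highlights regions
# that drove the decision
heat_map = features2 / (features2.max() + 1e-9)

# Classification score: a sigmoid over the pooled final activations
score = 1.0 / (1.0 + np.exp(-features2.mean()))
label = "affected" if score > 0.5 else "unaffected"
print(label, round(float(score), 3))
```

The layering is the key idea: the first kernel responds to simple intensity changes, the second aggregates those responses into larger structures, and the pooled final activations feed the affected/unaffected decision along with the heat map.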
Processing X-ray images, CT scans, or MRI scans with a machine learning (ML) or AI algorithm to detect specified abnormalities saves clinicians substantial time compared with manually reading each image. But just how trustworthy are the detection capabilities of these systems? Validation of AI models in medical imaging yields high sensitivity (that is, the ability to detect even subtle abnormalities, with the potential for early diagnosis), sometimes higher than that of human readers. But does this sensitivity translate directly to clinical use and safety?
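A short worked example shows why headline sensitivity alone can mislead. The numbers below are illustrative assumptions, not figures from any real device: at low disease prevalence, even a model with 95% sensitivity can flag far more healthy patients than sick ones.

```python
# Illustrative numbers only: a high-sensitivity screening model applied
# to a low-prevalence population.
n = 10_000                 # patients screened
prevalence = 0.01          # 1% actually have the condition
sensitivity = 0.95         # assumed model sensitivity
specificity = 0.90         # assumed model specificity

diseased = n * prevalence                    # 100 affected patients
healthy = n - diseased                       # 9,900 unaffected patients
true_pos = sensitivity * diseased            # 95 correct flags
false_pos = (1 - specificity) * healthy      # 990 false alarms

# Positive predictive value: of all patients the model flags,
# what fraction are actually affected?
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.2f}")   # roughly 0.09: most flags are false alarms
```

Under these assumptions, fewer than one in ten flagged patients is actually affected, which is exactly the kind of gap between validation metrics and clinical value that the next point describes.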
One of the most prominent challenges of using AI in medicine is traversing the gap between what an AI model predicts and what the optimal diagnosis, intervention, and treatment actually are.
Use of these models is often projected to reduce radiologists' growing workload and address staffing shortages. Furthermore, their high sensitivity implies more early detections of disease or conditions; combined with timely interventions, this can save lives. This may be the height of AI's positive influence in healthcare.
But by the nature of machine learning, even the most technically complex and intentionally designed algorithms are only as intelligent as the data fed to them. Concerns are arising about the accuracy of the data used to train these rapidly emerging tools, and about the size, breadth, and depth of the training datasets. Do they account for all presentations of a disease? Do they account for how diseases may manifest differently across patient demographics? If not, can any algorithm confidently advise a diagnosis?
Generative adversarial networks (GANs) are a class of machine learning models that generate new, synthetic data modeled on their training data; in this context, that means they create synthetic medical images. Researchers may use GANs to produce training data when limited applicable data is available. That is to say, medical imaging AI features may now be developed partly on synthetic medical images.
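One practical consequence is dataset bookkeeping: if synthetic images enter the training pool, their provenance should be tracked so that evaluation is restricted to real data. The sketch below is a hypothetical illustration of that bookkeeping; the string labels stand in for images that would, in practice, come from real scanners or a trained GAN generator.

```python
import random

random.seed(0)

# Stand-in "scans": in practice these would be image arrays, with the
# synthetic ones produced by a trained GAN generator.
real_scans = [f"real_scan_{i}" for i in range(50)]
synthetic_scans = [f"gan_scan_{i}" for i in range(150)]

# Tag provenance so evaluation can be restricted to real data; metrics
# computed on synthetic images can overstate real-world accuracy.
dataset = [(x, "real") for x in real_scans] + \
          [(x, "synthetic") for x in synthetic_scans]
random.shuffle(dataset)

split = int(0.8 * len(dataset))
train = dataset[:split]                                    # mixed pool
holdout = [(x, src) for x, src in dataset[split:]
           if src == "real"]                               # real only
print(len(train), len(holdout))
```

Note that in this toy split the synthetic images outnumber the real ones three to one, which mirrors the concern in the text: an algorithm can end up learning mostly from images no patient ever produced.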
Training data validation is thus another major challenge, and a further reason for caution when weighing the accuracy and safety of AI in clinical use.
Image classification is just one application of ML technology in the clinical environment, but these models are quickly gaining ground on the market. The FDA has cleared or approved a large and growing number of medical devices leveraging AI. To date, these systems use locked algorithms, meaning the algorithm is trained prior to implementation and remains unchanged until a new version is released. The FDA continues to enforce safety requirements for AI-enabled medical devices.
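The "locked algorithm" idea can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's or the FDA's actual mechanism: the model's weights are fingerprinted at release, and the device refuses to run inference if the weights no longer match, ruling out on-device learning or silent updates.

```python
import hashlib
import json

# Hypothetical released model weights (a real model would have far more).
released_weights = {"conv1": [0.12, -0.5, 0.33], "fc": [1.2, 0.7]}

def fingerprint(weights):
    """Deterministic SHA-256 fingerprint of a weight dictionary."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Recorded once at release time and stored alongside the release record.
RELEASE_HASH = fingerprint(released_weights)

def run_inference(weights, scan):
    """Refuse to run if the weights differ from the locked release."""
    if fingerprint(weights) != RELEASE_HASH:
        raise RuntimeError("Model weights differ from the locked release")
    # ... actual model inference would happen here ...
    return "ok"

print(run_inference(released_weights, scan=None))
```

Any change to the weights, however small, changes the fingerprint and blocks inference; shipping a new version means recording a new release hash, which matches the locked-until-new-version behavior described above.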
As AI classification models continue to reach the clinical environment, the need for dedicated standards grows. AAMI TIR34971:2023 is the first document to address risk management of medical devices with artificial intelligence and machine learning components. The guidance essentially describes how to apply ISO 14971 to devices with AI and ML features; ISO 14971 more generally defines risk management of medical devices, including software and cybersecurity risks.
ISO/IEC 5259-2:2024 outlines a framework for managing the quality of data used to train data-driven decision-making models, notably AI and ML models. However, it does not specifically address medical devices or the use of AI in the clinical environment.
Overall, AI's potential to enhance diagnostic accuracy, alleviate workloads, and increase early disease detection is groundbreaking. Still, its dependence on training data and its unproven clinical generalizability raise critical concerns about safety, reliability, and ethical implications. Because of these concerns, the future of AI in medical imaging will not be determined solely by its capabilities, but by the ethical and technical safeguards put in place to ensure it serves patients and staff safely and effectively. With all of this in mind, professionals must strike a careful balance between progress and caution.

Sources
Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30160-6/fulltext
How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications  https://pmc.ncbi.nlm.nih.gov/articles/PMC10740686/
ISO 14971:2019 https://www.iso.org/standard/72704.html
ISO/IEC 5259-2:2024 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:5259:-2:ed-1:v1:en
ISO/IEC 25024:2015 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:25024:ed-1:v1:en
Ethical Considerations for Artificial Intelligence in Medical Imaging https://pmc.ncbi.nlm.nih.gov/articles/PMC10690124/#:~:text=Drawing%20on%20the%20traditional%20principles,and%20model%20efficacy%2C%20fairness%20toward
AI/ML in Medical Devices: Regulatory Perspectives https://array.aami.org/content/news/ai-ml-medical-devices-us-eu-regulatory-perspectives
AI Medical Device Standards https://www.hardianhealth.com/insights/every-regulatory-ai-medical-device-standard-you-need-to-know
