By Rob Schluth


In February 2025, ECRI convened a panel to discuss the No. 1 Hazard – “Risks with AI-Enabled Health Technologies” – highlighted in the 2025 edition of the organization’s Top 10 Health Technology Hazards report. Panelists outlined some of the benefits and risks associated with the use of AI solutions in healthcare, and described strategies for minimizing those risks. Excerpts from that discussion are presented below.
Scott R. Lucas: “For AI to help us achieve the goal of zero preventable harm, the governance and risk management policies associated with a Total Systems Safety approach need to be established now, as more and more devices are hitting the marketplace.”
Lucas set the stage for ECRI’s program by noting that AI is revolutionizing healthcare and by expressing the hope that, if directed appropriately, the technology can help us achieve zero preventable harm. Currently, all stakeholders – system designers, providers, payers, regulators, and others – are balancing innovation and speed-to-market with the need to ensure that safe medical devices are in the hands of providers. That shared burden, along with the natural tension between those two goals, is foundational to why ECRI chose to highlight “Risks with AI-Enabled Health Technologies” in the 2025 edition of its Top 10 Health Technology Hazards report.
Jillian Hillman: “An AI model that is not well matched to the problem the organization is trying to solve will yield disappointing results.”
Hillman expanded on the importance of governance and oversight. Key themes discussed throughout the program included the need to assess risks, manage expectations, and monitor performance over time – activities that would be directed by an organization’s AI governance committee.
In a nutshell, the committee would be responsible for confirming that an AI solution remains aligned with a facility’s core ethical principles, and that it is deployed in a manner that improves patient outcomes without causing harm. If the organization does not have a clear idea of the problem that it hopes to solve with an AI solution, and if it has not defined metrics for success, it will not be able to assess whether the solution is meeting those goals.
Christie Bergerson: “One of the major keys to successful implementation is pairing AI with the right human expertise.”
In this segment of the program, Bergerson addressed some of the beneficial ways that AI is transforming patient care and some of the strategies that can facilitate success. She noted that: “AI performs best when working alongside skilled clinicians who understand its strengths and weaknesses. If the pairing isn’t right – if a clinician is too reliant on AI or isn’t familiar with its limitations – performance can drop dramatically.” In other words: “A good human-AI pairing is equal to more than the sum of its parts. A bad pairing can be worse than either alone.”
Bergerson also cautioned about some of the risks: Even the best AI models can drift in performance over time, especially if there are changes in patient populations, imaging protocols, supporting equipment, or workflows. Also, the technology isn’t foolproof. AI can sometimes “hallucinate” – producing confident but incorrect recommendations.
For these reasons, AI tools need ongoing monitoring to ensure they remain accurate, and human oversight is critical. ECRI considers it essential that humans remain “in the loop” – that is, that clinicians check AI-generated results and that they guard against complacency, as can occur when an AI solution performs well most of the time.
Jim Martucci: “Regulatory clearance provides some level of comfort, but there are aspects of AI systems that regulators can’t get at.”
Following up on the discussion of potential risks, Martucci addressed aspects of an effective risk management approach to AI. Regulatory clearance can be helpful, but clearance alone is not sufficient. For one thing, as noted, AI performance can vary (or drift) over time. For another, many AI products used in the healthcare environment would not be classified as medical devices, and thus would not be regulated as such. Nevertheless, such systems can affect patient care.
One key factor to consider is explainability – that is: Can you explain how the model works, or is the model so sophisticated that you’re fully relying on the tool itself? Another is the clinical risk profile, which is an assessment of the risk level associated with the solution, considering factors such as the patient’s condition (from non-serious to critical) and the significance of the information that the AI tool is providing (from simply informing clinical management to treating or diagnosing the patient).
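For illustration only, a risk profile like the one Martucci described could be operationalized as a simple lookup over those two factors. The sketch below is a hypothetical Python example; the factor scales, tier names, and scoring are assumptions made for demonstration, not a framework defined by ECRI or the panel.

```python
# Hypothetical sketch of a clinical risk profile lookup, based on the two
# factors named above: the patient's condition and the significance of the
# information the AI tool provides. Tier labels and scoring are illustrative
# assumptions, not an ECRI-defined scale.

CONDITION = ["non-serious", "serious", "critical"]      # patient's condition
SIGNIFICANCE = ["inform", "drive", "treat/diagnose"]    # role of the AI output

def risk_tier(condition: str, significance: str) -> str:
    """Return an illustrative risk tier for an AI solution."""
    score = CONDITION.index(condition) + SIGNIFICANCE.index(significance)
    return ["low", "moderate", "high", "high", "highest"][score]

print(risk_tier("non-serious", "inform"))   # -> "low"
print(risk_tier("critical", "inform"))      # -> "high"
```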
Francisco Rodriguez-Campos: “When evaluating AI-enabled technologies, the question you need to answer is: Does the AI solution work better than the previous process?”
During a discussion of ECRI’s approach to evaluating the AI capabilities of medical devices that it tests, Rodriguez-Campos noted that, while the technology behind AI solutions may be complex, the concept behind how to evaluate those solutions doesn’t have to be. Many AI solutions aren’t doing something new. Rather, they are helping to make an existing process faster, or more efficient, or more accurate. So, healthcare organizations need to assess whether the AI solution is achieving that goal.
To do that, processes need to be established to monitor the system’s performance over time. It’s important to get a good baseline of your performance before the AI solution is implemented into the workflow, so that you have a means of comparison. Then, track metrics related to, for example, accuracy, turnaround time, uptime, cost savings, or other factors that align with your goals for implementing the system; a minimal sketch of this kind of before-and-after comparison appears below. Also, look for safety concerns, such as hallucinations, and gauge user satisfaction to identify whether the tool is meeting users’ expectations or needs.
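To make the idea concrete, here is a minimal, hypothetical Python sketch of that comparison. The metric names, sample values, and the simple ok/review check are placeholder assumptions; an organization would substitute the measures and thresholds that match its own goals for the system.

```python
# Minimal sketch of before/after performance monitoring for an AI solution.
# Metric names and sample values are hypothetical placeholders.

# "higher_is_better" tells the check which direction counts as improvement.
METRICS = {
    "accuracy":       {"higher_is_better": True},
    "turnaround_min": {"higher_is_better": False},
}

baseline = {"accuracy": 0.91, "turnaround_min": 42.0}  # pre-AI workflow
current = {"accuracy": 0.94, "turnaround_min": 31.5}   # with the AI tool

for name, cfg in METRICS.items():
    before, after = baseline[name], current[name]
    delta = after - before
    better = delta > 0 if cfg["higher_is_better"] else delta < 0
    flag = "ok" if better else "REVIEW"  # surface regressions for follow-up
    print(f"{name}: {before} -> {after} ({delta:+.2f}) [{flag}]")
```

In practice, these checks would run on an ongoing schedule rather than once, consistent with the panel’s emphasis on monitoring performance over time.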
ECRI concluded the program by stressing the importance of maintaining a record of adverse events and near misses. You can’t fix a problem if you don’t know it exists, which is why event reporting is so critical to safe patient care. However, current event reporting systems may not be well suited to capturing problems associated with AI technologies. For instance, users may not recognize when or if AI functionality has contributed to an event. ECRI encourages more research into this area.
To view the full webcast, visit: https://ly.ecri.org/LabWebcast2025-AI-Risks.

