As humans, we make risk evaluations every day. Do we eat unhealthy food versus healthier alternatives? Do we use tobacco products? Do we visit high-crime areas? Do we make a home electrical repair without first shutting off the power at the breaker box?
While most of these decisions would seem to come down to common sense, they are conscious decisions we make just the same. They are based on personal experience, deeply held beliefs, empirical evidence, accumulated knowledge or published historical data.
In the world of investing, it is common to compare risk with reward and gauge what level of risk one is willing to accept with the hope of rewards. This is called a risk tolerance profile and every individual is unique in this regard. The investment community has devised many measures of risk. Stocks, mutual funds and exchange-traded funds all feature a risk profile. In daily living, we seek out best practices and avoid unneeded risk whenever possible.
With medical devices, the safety and security of the device has to be carefully managed. There is little room for risk and the evaluation of each device for risk has to be made using a ranking system and hierarchy based on the critical nature of that device. Risk has to be managed throughout the product life cycle.
A measure of risk in a medical device is the potential for an adverse event occurring with that device. The severity and probability of an adverse event are then quantified into a score. The score reflects the likelihood of an adverse event and the potential consequence resulting from that event.
On a continuum, the range of risk can vary from negligible to catastrophic, and the probability for any given device could range from improbable to probable.
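The severity-and-probability scoring described above can be sketched as a simple risk matrix. The category labels and the 1-to-5 rankings below are illustrative assumptions, not taken from any particular standard:

```python
# Illustrative risk matrix: risk is a combination of the probability of an
# adverse event and the consequence (severity) of that event.
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}

def risk_score(severity: str, probability: str) -> int:
    """Quantify risk as severity rank multiplied by probability rank."""
    return SEVERITY[severity] * PROBABILITY[probability]

# A catastrophic but improbable failure can score lower than a
# moderate failure that is probable.
print(risk_score("catastrophic", "improbable"))  # 5
print(risk_score("moderate", "probable"))        # 12
```

The multiplication is one common convention; some organizations use a lookup table or addition instead, which changes how the two axes trade off against each other.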
Risk assessment can rely on a number of “tools” including manufacturer disclosures (MDS2), the “Medical Device Innovation, Safety and Security Consortium (MDISS) Medical Device Risk Assessment Platform,” FDA resources and ANSI/AAMI/ISO 14971:2007/(R)2016 “Medical devices – Application of risk management to medical devices.” To measure cybersecurity risk, there is also “The Common Vulnerability Scoring System (CVSS).”
A rubric for applying CVSS to medical devices was developed by the MITRE Corporation under contract to the FDA, in collaboration with medical device manufacturers, health care delivery organizations, security experts and safety/risk assessment experts.
Part of standardizing a risk scoring system is the acceptance and understanding of the terms involved – such as residual risk, risk management, risk estimation, harm or severity.
The process of risk management for medical devices involves several steps including risk analysis, risk evaluation, risk control, the evaluation of residual risk acceptability, a risk management report and risk monitoring analytics. The analytics include production and post-production information.
In the past, a scoring system developed by Larry Fennigkoh and Bridget Smith was widely embraced by HTM departments and published by The Joint Commission. The guide included categories that allowed for scoring based on equipment function, physical risks associated with clinical application and maintenance requirements.
The equipment function category was scored from 1 to 10, with life support equipment an example of equipment receiving the highest score. The physical risks category used a five-point scale: equipment capable of causing death scored a five, equipment that could cause patient or operator injury scored a four, and lower-risk equipment received a correspondingly lower score.
The maintenance requirements category was also scored on a five-point scale, topping out with equipment requiring extensive maintenance at five, equipment requiring average maintenance at three, and so on down the scale.
When taken together, the resulting score under each of the three categories provided a score aggregate, with a maximum potential score of 20. The number was referred to as an equipment management (EM) number. An EM number of 12 or higher meant that the piece of equipment should be included in an equipment management program. Those devices with scores of 11 or less could be excluded from such a program.
Under the equipment maintenance component (or planned maintenance [PM]) of the approach, those devices scoring a four or five would get maintenance every six months, and those devices scoring a one to three would be maintained every 12 months.
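The Fennigkoh-Smith arithmetic described above can be expressed as a short calculation. This is a sketch of the approach as summarized here; the function names and the sample device are illustrative:

```python
def em_number(function: int, physical_risk: int, maintenance: int) -> int:
    """Equipment management (EM) number: the sum of the function (1-10),
    physical risk (1-5) and maintenance requirements (1-5) scores,
    for a maximum possible score of 20."""
    return function + physical_risk + maintenance

def include_in_program(em: int) -> bool:
    """Devices with an EM number of 12 or higher are included in the
    equipment management program; 11 or less may be excluded."""
    return em >= 12

def pm_interval_months(maintenance: int) -> int:
    """PM every six months for maintenance scores of 4 or 5,
    every 12 months for scores of 1 to 3."""
    return 6 if maintenance >= 4 else 12

# Hypothetical life-support device: highest function score, capable of
# causing death, extensive maintenance requirements.
em = em_number(10, 5, 5)
print(em, include_in_program(em), pm_interval_months(5))  # 20 True 6
```

A device scoring, say, 6 + 3 + 2 would total 11 and fall outside the program under this threshold.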
“The Fennigkoh-Smith algorithm has long been used as a ‘risk assessment’ tool, even though its primary author, Larry Fennigkoh, does not regard it as such. The definition of ‘risk’ that is recognized by our clinical colleagues, and by professionals in other domains, is that ‘risk’ is a combination of the ‘probability’ of an adverse event and the ‘consequence’ of that event. Under that universally accepted definition, the pioneering Fennigkoh-Smith method does not actually calculate risk,” says Matthew Baretich, P.E., Ph.D., president of Baretich Engineering Inc. in Fort Collins, Colorado.
In June of 2020, Baretich co-presented the Webinar Wednesday session “Don’t Take Risks with Medical Device Risk Scoring” with Carol Davis-Smith, CCE, FACCE, AAMIF, president of Carol Davis-Smith and Associates LLC. The webinar was sponsored by Nuvolo.
“For risk assessment of medical devices, the ‘consequence’ of a device failure is usually assessed qualitatively rather than quantitatively. This can range from death and serious harm on the high end to inconvenience and negligible effect on the low end,” Baretich says.
He says that the “probability” of a medical device failure can be assessed quantitatively by looking at CMMS data.
“If your CMMS is configured effectively and if the data in the CMMS are accurate, failure rates can be easily calculated for various equipment types or for individual devices,” Baretich adds.
Some time after the Fennigkoh and Smith approach was introduced, the American Society for Healthcare Engineering (ASHE) introduced its own algorithm or approach. That formula was more complicated than the Fennigkoh-Smith algorithm. Both approaches could be found as components of some CMMS software.
More recently, The Joint Commission and other accrediting organizations have required all equipment to be included in an equipment management program, and the PM requirements have changed as well. The Centers for Medicare and Medicaid Services (CMS) and The Joint Commission each have all-encompassing standards. An alternative equipment maintenance (AEM) program normally addresses these requirements.
The Joint Commission uses the terms “seriousness” and “prevalence” of harm. These terms allow for a clearer measure of risk and can be applied to whole devices as well as to the analysis of specific failures that are more likely with one component of a device.
Severity ratings can be somewhat subjective while probability data can be found in CMMS. This is where failure data can be found. Some CMMS automatically calculate the mean time between failures (MTBF) statistics.
Some other areas within health care where quantifying risk is important include cybersecurity, emergency preparedness, patient risk scores and community health.
While most of the media attention on the health care industry in 2020 was focused on the response to a serious viral illness, the story that was mostly missed was the steep spike in cyberattacks on hospitals. The COVID-19 pandemic emphasized the need for standardized practices in HTM.
“The pandemic has meant that HTM work is carried out under difficult conditions and with increasingly limited resources. One effect is that AEM programs have become standard practice. A well-crafted AEM program can reduce resource requirements for PM, which saves time and money. It can also keep the levels of equipment safety and effectiveness high and, in some cases, increase them,” Baretich says.
He says that a critical aspect of an AEM program is having a metric in place to monitor the effectiveness of the PM program.
“CMS regulations and accreditation standards require us to make sure that putting equipment into our AEM program does not reduce safety and effectiveness. We need a performance metric to track how we’re doing,” Baretich says.
He adds that an emerging metric is “MTBFPM.”
“This is based on the familiar MTBF (mean time between failures) metric that our CMMS software should already be configured to compute automatically. But, as we know, not all equipment failures can be mitigated by better PM. For example, infusion pump failures caused by incorrect programming cannot be made less likely by more PM. To focus on PM-specific performance, MTBFPM tracks the mean time between PM-related failures only,” Baretich says.
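As a sketch of how MTBF and MTBFPM might be computed from CMMS work-order data, the following assumes a record layout and sample failure dates that are entirely hypothetical; a real CMMS would supply its own fields for flagging PM-related failures:

```python
from datetime import datetime
from statistics import mean

# Hypothetical CMMS failure records for one device:
# (failure_date, pm_related) -- a real system would use its own schema.
failures = [
    (datetime(2020, 1, 10), True),
    (datetime(2020, 3, 2), False),   # e.g. a use error, not PM-preventable
    (datetime(2020, 6, 20), True),
    (datetime(2020, 11, 5), True),
]

def mtbf_days(dates):
    """Mean time between consecutive failures, in days (None if fewer than two)."""
    dates = sorted(dates)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return mean(gaps) if gaps else None

all_dates = [date for date, _ in failures]
pm_dates = [date for date, pm_related in failures if pm_related]

print(mtbf_days(all_dates))  # MTBF over all failures: 100
print(mtbf_days(pm_dates))   # MTBFPM over PM-related failures only: 150
```

Note that MTBFPM is higher than plain MTBF here because the non-PM-related failure is excluded from the denominator, which is exactly the point of the metric: it isolates the failures a PM program could actually influence.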
Baretich explains that to implement the MTBFPM metric, one needs CMMS data that (a) accurately record medical equipment failures and (b) distinguish between PM-related failures and other types of equipment failures. An AAMI-sponsored CMMS collaborative project recently tackled that issue, as described in a white paper (www.aami.org/HTM/htm-resources/cmms-collaborative-white-papers).
In addition to the challenges brought about by the pandemic, the risks inherent in adding medical devices to a network continue to escalate.
“There has been constantly increasing focus on cybersecurity year on year. While there hasn’t been a widespread event like WannaCry and NotPetya in 2017, there has been an increase in the incidence of ransomware at hospitals. Ransomware is typically targeted on organizations that may have weak security but have mission critical computing systems that might make them willing to pay rather than go through a lengthy restore process,” says Ken Hoyme, senior fellow, global product cybersecurity, global technology and services at Boston Scientific in St. Paul, Minnesota.
He says that, as a result, hospitals have higher expectations regarding the security of the devices that they bring onto their networks. Contract language now specifies patching expectations and communication of security test results. The MDS2 form, which is commonly requested, has been updated with many more questions about a device’s security configuration and controls. The result is more transparency between manufacturers and health care delivery organizations regarding security information.
“The FDA has been communicating their expectation that threat modeling — also known as security architecture analysis — should be performed during product development and as a means for assessing new vulnerabilities/exploits against fielded equipment. This has resulted in the industry increasing use of these techniques. The FDA funded the Medical Device Innovation Consortium (MDIC) and MITRE to develop a threat modeling boot camp, and later this year a threat modeling playbook to assist industry in consistently applying these techniques,” Hoyme says.
This approach means that the manufacturer considers the current known threats during product design.
“When we develop new products or encounter changes in a product’s operating environment, we take a step back to the initial design from a cybersecurity standpoint – threat modeling. Similarly, when new vulnerabilities emerge, we look at these from a threat standpoint first,” says Jaap Qualm, vice president of product cybersecurity for GE Healthcare.
He says that from there, the next step is to understand the associated risks and ensure implementation of the appropriate security controls to address the risks.
“The need to continuously step back to these fundamentals of the risk function is not only applicable to the medical device manufacturers in their design and support processes, but also to health delivery organizations in the operation and networking/connectivity of the medical devices. The need to prioritize security is now more critical than ever to protecting health care, combined with a greater need for close collaboration within the health care sector. The better we all work together, the more effective we will be in achieving our common goals of ensuring confidentiality, integrity and availability in this sector,” Qualm says.
Hoyme adds that the pandemic has increased the use of telehealth and remote device support. This is increasing the potential threats, which have to be addressed as these technologies are developed and put in use.
He also suggests the MITRE rubric as a source for best practices in this area.
“MITRE has released a ‘rubric’ to allow CVSS scoring to be more readily applied to medical devices,” he says.
“Deployment of tools to allow for remote device software distribution and security log collection is getting more traction. With the need to patch more frequently, the cost of deployment (the last mile problem) if each device needs to be visited by a service tech, can be a backpressure on how frequently patches are done. Of course, if remote software update is the reason to ‘add’ connectivity, the threat surface is greater and the need for patching becomes more important. But traditional service models are not sustainable if several patch cycles per year becomes a norm,” Hoyme adds.
Another best practice in this area is that medical device manufacturers should apply systemic threat-based security risk analysis in device designs, and health care delivery organizations should work with manufacturers to apply this in their operational security practices, according to Qualm.
“Throughout this space of shared responsibility, all should have a process to understand the assets, apply structured threat modeling, understand and minimize vulnerabilities and implement appropriate risk-based controls from a well-defined controls catalog,” he says.
“After implementation, focus should go to life-cycle support processes and mechanisms for transparency of risk information and solutions. Operationally, we do offer managed security services backed by our extensive knowledge of medical devices, which can be a great help in ongoing management of existing risks and new threats as they come up in the health care delivery organization’s clinical network environment,” Qualm adds.
Determining risk and applying safeguards allows medical devices to provide safe and reliable operation throughout the life cycle of each device. Mitigating the risk introduced by connections to the Internet also requires quantification and attention based on best practices.
Through the use of several tools developed to measure risk and incorporate it into an effective equipment management program, the HTM department can best allocate resources during a period of substantial demands. The science will evolve and technology will improve, making data and decision-making information readily available.
© 2021, TechNation Magazine.