
By Nadia ElKaissi, CHTM
Picture this: It’s 9:02 a.m. on a routine October morning at the office when a security bulletin referencing the RohanRush (RR) load balancer lands in your inbox, followed by an emergency notice from CISA. Within minutes, HTM leaders, security leaders, network engineers and infrastructure teams are all asking the same question:
“Do we have any of these systems in our environment – and if so, where?”
For HTM, the focus was more specific: “Which clinical systems depend on RohanRush, and what happens if they go down?”
WHY THIS HIT HEALTHCARE DIFFERENTLY
Hospitals often rely on load balancers in places that are invisible to clinicians but critical to care delivery: secure application delivery for imaging systems, patient portals and telehealth platforms, and remote access for vendors and support staff.
When RR disclosed a security incident involving unauthorized access to internal systems and sensitive product information, the risk extended beyond known CVEs. Organizations had to assume attackers had potentially gained deeper insight into product architecture and exploit paths. That reality compressed response windows and raised the stakes for every organization relying on RR devices. CISA’s involvement reinforced the message that this was not just an IT issue but an operational risk with patient safety implications.
Phase One: Asset Inventory Under Pressure
The first step was obvious but among the more difficult to accomplish quickly: identify all RR assets. In practice, this required HTM to:
1. Validate the medical device records that may be incomplete or outdated.
2. Scan networks for deployed RR instances.
3. Identify any cloud-based RR appliances.
Many healthcare organizations discovered that RR appliances had been deployed years earlier to support specific applications and never fully documented in a centralized inventory. Others found instances embedded in vendor-managed solutions, where ownership and patch responsibility were unclear.
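The reconciliation at the heart of this phase can be sketched in a few lines of Python. This is a hypothetical illustration: the hostnames, field names and data structures are invented for the example, not a real CMMS schema or scanner output.

```python
# Reconcile a CMMS asset export against live network-scan results.
# All hostnames and fields below are illustrative placeholders.

cmms_records = {            # hostname -> documented owner (from the CMMS export)
    "rr-lb-imaging-01": "Radiology IT",
    "rr-lb-portal-01": "Web Services",
}

scan_hits = {               # hostnames a network scan flagged as RR appliances
    "rr-lb-imaging-01",
    "rr-lb-portal-01",
    "rr-lb-vendor-02",      # deployed years ago, never documented
}

undocumented = scan_hits - cmms_records.keys()   # on the wire, not in inventory
stale = cmms_records.keys() - scan_hits          # in inventory, not responding

for host in sorted(undocumented):
    print(f"UNDOCUMENTED: {host}: assign an owner and add to inventory")
for host in sorted(stale):
    print(f"STALE RECORD: {host}: verify decommissioning or scan coverage")
```

The set arithmetic surfaces exactly the two failure modes described above: appliances on the network that the inventory never captured, and inventory records that no longer correspond to anything reachable.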
Phase Two: Assessing Exposure
Once the assets were identified, HTM had to answer a harder question: “What breaks if we touch this?” For this, teams assessed:
• Whether management interfaces were reachable from the Internet
• Which clinical workflows depended on each RR instance
• Whether downtime could disrupt patient care, scheduling, imaging and/or medication workflows
Unlike other industries, healthcare cannot always patch immediately without coordination. Maintenance windows must align with clinical operations, and even short outages can have cascading effects. This phase required risk-based decision-making and balancing cybersecurity urgency against continuity of care.
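The first assessment above, whether a management interface is reachable, can be sketched as a simple TCP connect test run from an untrusted network segment. The hostname below is a placeholder, and the assumption that management is exposed over port 443 is illustrative rather than a statement about any real appliance.

```python
import socket

def mgmt_port_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given management port succeeds.

    Run from outside the trusted network segment: a successful connect from
    there means the management plane is exposed and should be restricted
    before any patch window is scheduled.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative only: "rr-mgmt.example.org" is a placeholder hostname.
# if mgmt_port_reachable("rr-mgmt.example.org"):
#     print("Management interface reachable: restrict before patch day")
```

A connect test like this is a coarse first pass; a real exposure review would also confirm what service is answering and whether authentication is enforced.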
Phase Three: Coordinating Across Clinical, IT & Vendors
Effective response depended on coordination. HTM acted as translators between:
• Security teams focused on threat mitigation
• Clinical teams focused on stability and uptime
• Vendors responsible for clinical applications and device integrations
Clear communication was essential. Who owned the RR instance? Who applied the patch? Who validated functionality afterward? In many cases, this required contract review, vendor escalation and rapid coordination with clinical teams and leadership. Organizations with established vendor relationships and escalation paths moved faster and with less friction.
Phase Four: Tracking Remediation to Completion
In healthcare, “patched” is not sufficient. Verified and documented is the standard.
Teams tracked:
• Patch status by system and clinical impact
• Post-update validation with application owners
• Exceptions where immediate patching was not feasible, including compensating controls
This documentation supported internal governance, external audits and leadership assurance that patient-facing systems remained protected.
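A minimal structure for that tracking might look like the sketch below. The status values, field names and workflow rule (no validation sign-off before a patch is recorded) are a hypothetical illustration, not a specific governance standard.

```python
from dataclasses import dataclass, field

@dataclass
class RemediationRecord:
    """One tracked RR instance, from identification through verified closure."""
    asset: str
    clinical_impact: str                 # e.g. "imaging", "patient portal"
    status: str = "identified"           # identified -> patched -> validated
    validated_by: str = ""               # application owner who signed off
    compensating_controls: list[str] = field(default_factory=list)

    def close(self, owner: str) -> None:
        """Mark validated only after an application owner confirms function."""
        if self.status != "patched":
            raise ValueError("validate only after patching, or record an exception")
        self.status = "validated"
        self.validated_by = owner

# Hypothetical usage: asset name and owner title are placeholders.
rec = RemediationRecord("rr-lb-imaging-01", clinical_impact="imaging")
rec.status = "patched"
rec.close("PACS application owner")
```

Encoding the rule in code mirrors the standard stated above: a record cannot reach “validated” without first being patched, and the sign-off is captured alongside the asset for audit.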

LESSONS LEARNED
The RR incident reinforced a critical reality: cybersecurity events increasingly intersect with patient care. Healthcare organizations that responded most effectively had already invested in:
• Accurate and continuously updated inventories
• Clear ownership models between IT, HTM and vendors
• Patch policies that prioritize edge and access infrastructure
• Segmentation and restricted management access by default
These practices didn’t eliminate risk, but they reduced uncertainty when decisions mattered most.
PREPARING FOR THE NEXT ADVISORY
There will be another alert. Another vendor. Another urgent directive.
HTM can prepare by:
• Maintaining a living inventory of infrastructure supporting clinical systems
• Understanding dependencies between network components and patient care workflows
• Establishing pre-approved emergency patch pathways
• Regularly reviewing vendor responsibilities for security updates
When the next 9:02 a.m. alert arrives, preparedness will determine whether the response feels like a crisis or a controlled operation. In healthcare, cybersecurity readiness is no longer just about protecting data. It’s about protecting care delivery itself.

