
By Arleen Thukral, M.S., CCE, CHTM
Artificial intelligence is rapidly transforming the cybersecurity landscape across health systems. For healthcare technology management (HTM) professionals, AI introduces powerful new capabilities – advanced anomaly detection, autonomous device monitoring and riskbased prioritization – that help reduce response times and uncover subtle threats human analysts may never see. These tools allow cybersecurity teams to understand what changed, why it matters, and how to act quickly, reducing noise and sharpening decisionmaking. But while the benefits are substantial, the cybersecurity risks introduced by AI systems demand equal attention. Understanding those risks is now essential for every HTM leader responsible for safeguarding connected clinical environments.
Unlike traditional software, AI systems can learn from data, adapt to new inputs and can behave in ways that are difficult to predict or fully explain. This dynamic behavior introduces novel cybersecurity challenges that HTM professionals must now manage.
One of the most immediate concerns is the dramatically expanded attack surface created by AI integrations. AI tools draw data from multiple interconnected systems, external APIs, cloud-based inference service and model updates. Each component – whether a third-party library, data pipeline or remote model endpoint – represents a potential opening for exploitation.
Another significant category of risk revolves around prompt injection, especially in large language models (LLMs). LLMs can be manipulated through cleverly crafted input that overrides intended instructions. The risk is even greater in agentic AI browsers – tools capable of navigating websites and taking autonomous actions. In these scenarios, hidden malicious instructions embedded in a webpage can cause the AI assistant to navigate to a user’s logged-in accounts, extract sensitive information (such as email address or one-time password) and exfiltrate this data to an attacker’s server. These attacks succeed even when standard web security controls are functioning properly.
Another major category of AI-specific vulnerability involves poisoning the data used to train or update models. Toxic or manipulated training data can corrupt model outputs, leading to dangerous or clinically incorrect recommendations in healthcare AI systems. Similarly, federated learning models-which train on decentralized datasets are especially vulnerable because any participating endpoint can introduce hostile updates that skew or degrade the entire global model. In practical terms, this could mean a compromised medical device or untrusted external dataset silently influencing AI risk-scoring models, etc.
To prepare for this new frontier, HTM professionals should prioritize several key actions. First, strong governance frameworks must be established to oversee AI deployment, data use and model change control. Transparency and auditability are equally important; clinical AI tools must be explainable enough to evaluate their decisions and root out malicious or erroneous behavior. Strengthening patch management and life cycle oversight for AI-enabled medical devices is essential, as well as improving visibility into AI dependencies across networks. Workforce training on AI risk and vendor collaboration is also vital; securing AI requests coordinated efforts across manufacturers, integrators and health system cybersecurity teams. By understanding these risks and developing the skills and structures needed to manage them, HTM professionals can harness the benefits of AI while protecting the safety of clinical systems.
