AI in Healthcare Is Booming, and So Are the Cyber Risks
The rise of AI in healthcare is transforming diagnostics, documentation, and decision-making. Radiologists use AI to prioritize critical scans. Chatbots schedule appointments and answer patient questions. Predictive algorithms flag patients at risk for readmission. But while the potential is enormous, so are the cybersecurity risks, and they are often overlooked.
Unlike traditional software, AI systems rely on training data that often includes sensitive patient information. If that data is exposed or manipulated, it can compromise not just privacy but outcomes. Adversarial attacks can subtly alter inputs (like a medical image) to fool an algorithm into misclassifying it. In one 2019 study, researchers found that adding imperceptible noise to CT scans could trick AI into missing 95% of tumors [1].
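To make the mechanism concrete, here is a minimal sketch of an FGSM-style adversarial perturbation. It uses a toy linear classifier and synthetic data, not a real radiology model; the weights `w`, the "scan" `x`, and the budget `eps` are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)             # hypothetical learned weights of a linear model
noise = rng.normal(size=64)
noise -= (w @ noise) / (w @ w) * w  # keep only the component orthogonal to w
x = noise + w / (w @ w)             # synthetic "scan": clean score w @ x == 1.0

def predict(img):
    """1 = 'tumor present', 0 = 'no tumor' under the toy model."""
    return 1 if w @ img > 0 else 0

# FGSM-style attack: shift each pixel a tiny amount against the score gradient.
# For a linear model, the gradient of the score with respect to the input is w.
eps = 0.05                          # small per-pixel budget (pixels have unit scale)
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1: clean scan is flagged as a tumor
print(predict(x_adv))  # 0: the barely-changed scan is missed
```

The point is that the per-pixel change is far below the noise already present in the image, yet it is enough to flip the classification, which is why perturbed inputs are hard to catch by eye.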
AI models are also increasingly cloud-based or integrated via third-party APIs, meaning patient data flows through systems outside your direct control. According to the Ponemon Institute, 63% of healthcare organizations don’t fully track how AI models interact with protected health information (PHI) [2].
Then there’s the human factor. As clinicians and staff come to rely on AI suggestions, the risk of automation bias grows: flawed AI outputs are accepted without scrutiny, especially under pressure. A hacked or manipulated model could quietly erode clinical judgment.
So what’s the solution?
Before adopting AI tools, practices should conduct risk assessments tailored to AI systems:
Where is the data stored?
How is the model trained?
Can you verify outputs?
Who has access to the model and its infrastructure?
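The questions above can be tracked as a structured checklist, so that unanswered items surface as open findings rather than getting lost in a document. This is an illustrative sketch only; the keys and sample answers are hypothetical, and a real assessment needs clinical and security review:

```python
# AI-specific risk assessment questions, keyed for tracking (illustrative).
checklist = {
    "data_storage": "Where is the data stored?",
    "model_training": "How is the model trained?",
    "output_verification": "Can you verify outputs?",
    "access_control": "Who has access to the model and its infrastructure?",
}

def open_findings(answers):
    """Return every question with no documented answer."""
    return [q for key, q in checklist.items() if not answers.get(key)]

# Hypothetical partial assessment: two items are still unresolved.
answers = {
    "data_storage": "HIPAA-eligible cloud tenant, region-locked",
    "access_control": "Vendor admins plus two internal engineers",
}
for q in open_findings(answers):
    print("UNRESOLVED:", q)
```

Keeping the answers alongside the questions also gives you an audit trail when a tool is re-assessed after a vendor update.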
At Jourdain Risk Group, we integrate AI risk into our threat modeling frameworks using Bayesian tools that map dependencies between human users, data sources, and algorithm behavior. We help practices assess the true cost of failure, not just the promise of efficiency.
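As a toy illustration of the kind of dependency mapping described above, consider a two-step chain: the probability that a flawed output reaches a clinical decision depends on both model compromise and the clinician's acceptance rate. All probabilities here are invented for the example; this is not Jourdain Risk Group's actual framework:

```python
# Hypothetical inputs to a minimal two-node probability chain.
p_compromise = 0.02          # P(model is compromised)
p_bad_given_comp = 0.60      # P(flawed output | compromised model)
p_bad_given_ok = 0.05        # P(flawed output | healthy model)
p_accept = 0.70              # P(clinician accepts the output without scrutiny)

# Total probability of a flawed output, then of it influencing care.
p_flawed = p_compromise * p_bad_given_comp + (1 - p_compromise) * p_bad_given_ok
p_harm = p_flawed * p_accept

print(f"P(flawed output influences a decision) = {p_harm:.3f}")
```

Even this crude chain makes one lesson visible: lowering the acceptance rate (through verification workflows) reduces the risk just as directly as hardening the model itself.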
AI can save lives—but only if it’s secure, transparent, and trustworthy.
Sources:
[1] Finlayson SG, et al. Adversarial attacks on medical machine learning. Science, 2019
[2] Ponemon Institute. The State of AI and Machine Learning in Healthcare Security, 2023