THREAT PROFILE · HEALTHCARE
AI Threats Targeting Healthcare
MITRE ATLAS v5.6.0 techniques targeting healthcare AI — PHI exfiltration, training-data poisoning, hallucinated clinical content, agentic prescription tool compromise — mapped to the AISS sub-controls that mitigate each.
Why Healthcare AI is a Distinct Threat Surface
Healthcare AI operates under a unique combination of constraints that no other vertical shares all at once: regulated PHI on every inference call (HIPAA), strict-liability clinical-decision exposure, life-critical workflows where hallucinations cause real harm, and a regulatory environment that treats AI as a medical device under specific conditions (FDA Software-as-a-Medical-Device guidance).
The threats below are not theoretical. PHI exfiltration through LLM data leakage has been documented in 2024-2026 peer-reviewed research. Clinical RAG poisoning has been demonstrated against production EHR-integrated assistants. Adversarial inputs against radiology models have been shown to flip diagnostic outputs with sub-pixel perturbations invisible to humans. Healthcare AI security cannot rely on generic LLM safeguards.
AISS maps every healthcare-relevant ATLAS technique to specific sub-controls — most commonly in AC-3 (Data Protection), AC-5 (Supply Chain), AC-6 (Output Validation), and AC-10 (Model Security). Running an AISS assessment against your healthcare AI gives you a domain-specific scorecard with an audit-grade derivation, not a generic compliance checkmark.
ATLAS TECHNIQUES
Top AI threats in this vertical
Drawn from MITRE ATLAS v5.6.0, contextualized for the threat surface this vertical actually faces. Each entry lists the AISS sub-controls that mitigate it — so the assessment-to-mitigation path is auditable end-to-end.
Exfiltration via AI Inference API
Clinical inference APIs handle PHI on every call. An attacker with valid API credentials — or a misconfigured public endpoint — can extract patient identifiers, diagnoses, and free-text clinical notes through normal model queries. HIPAA's minimum-necessary rule does not apply at the model layer unless explicitly enforced.
Mitigated by
LLM Data Leakage
LLMs fine-tuned on clinical corpora memorize rare phrases — patient names, MRNs, dictated note fragments. A user asking even mundane questions can elicit verbatim PHI from another patient's record if training was not properly deduplicated and filtered. Differential privacy is rarely applied in clinical fine-tuning.
Mitigated by
Poison Training Data
Adversaries can submit subtly mislabeled images, manipulated clinical notes, or doctored research papers into datasets used for fine-tuning diagnostic models. A pathology model trained on 0.5% poisoned slides may quietly underdiagnose a target condition. Detection requires upstream lineage controls.
Mitigated by
RAG Poisoning
Clinical RAG systems pull from internal guidelines, formularies, and external sources like UpToDate or PubMed. An attacker injecting one false entry into the index — via compromised source ingestion or weak admin access — can shape the model's clinical recommendations for every subsequent user, persistently.
Mitigated by
Publish Hallucinated Entities
LLMs hallucinate plausible-sounding citations and drug interactions. In healthcare this is not a UX problem — it is a malpractice risk. Hallucinated guidance acted on by clinicians or pasted into discharge summaries creates documented chains of harm.
Mitigated by
Evade AI Model
Adversarial inputs designed to fool diagnostic models — subtle pixel perturbations on radiology images, character substitutions in clinical text — have been demonstrated in peer-reviewed research against deployed medical AI systems. Detection rates fall below 60% without dedicated robustness testing.
Mitigated by
LLM Prompt Injection
Patient-facing triage chatbots and physician-facing scribes accept arbitrary text. A patient including jailbreak strings in a symptom description — or an EHR-pasted note containing injected instructions from prior summarization — can redirect the model's behavior, leak prompts, or trigger tool calls.
Mitigated by
Generate Deepfakes
Synthetic voice clones of physicians can bypass weak call-center authentication to request prescription refills, demand records release, or initiate identity theft. Voice biometrics that worked on call-center fraud detection in 2022 are now defeated by 3-second sample clones.
Mitigated by
AI Supply Chain Compromise
Healthcare organizations rarely train models from scratch — they fine-tune from foundation models pulled from public hubs (Hugging Face, model marketplaces) and deploy them via vendor SaaS. A compromised pre-trained model with embedded backdoor triggers ships unchanged into the clinical environment.
Mitigated by
AI Agent Tool Credential Harvesting
Agentic clinical assistants chain tools: EHR query, lab order, prescription system, pharmacy interface. Each tool requires credentials handled inside the agent runtime. Prompt-injected agents have been demonstrated extracting service-account credentials directly from agent configuration in benign-looking conversations.
Mitigated by
Assess your AI against these threats
An AISS assessment scores your organization on the AISS sub-controls that mitigate each ATLAS technique in this profile — and shows you the gaps, with audit-grade transparency.
Or browse other verticals at /threats
