Tag: Clinical AI Security
Prompt injection, RAG poisoning, adversarial attacks on clinical LLMs, EHR security, medical AI vulnerabilities
-
FDA Clearance for AI Medical Devices: What 510(k), De Novo, and PMA Actually Mean
The FDA has cleared 700+ AI medical devices through 510(k), De Novo, and PMA pathways. A March 2026 European Radiology review documents how the EU AI Act, FDA…
-
Poisoning the Medical Brain: RAG Attacks and Security in Clinical AI Systems
Prompt injection attacks succeeded against clinical LLMs 94% of the time in JAMA testing. RAG systems face a harder attack: poisoned retrieved documents that the LLM cannot distinguish from legitimate sources. How…
-
Radiology Foundation Models: What Merlin, the 22% Hallucination Rate, and ED Fracture Data Tell Us
Stanford published Merlin in Nature: a CT foundation model tested on 44,098 scans across 3 institutions. Meanwhile 22% of AI radiology reports contain factual errors and LLMs miss…
-
Poisoning the Medical Brain: How RAG Attacks Corrupt Biomedical AI
When the knowledge base is the attack surface. RAG poisoning allows adversaries to redirect medical AI outputs without touching model weights. Five arXiv papers explain the mechanism and…
-
Prompt Injection Succeeds 94% of the Time Against Clinical LLMs
A JAMA Network Open study found prompt injection attacks succeed 94.4% of the time against clinical LLMs, including 91.7% in high-harm pregnancy drug scenarios. Based on PubMed-indexed research,…