Doctors Struggle to Spot AI-Generated X-Rays, Raising Scam Risks
A new study from Mount Sinai’s Icahn School of Medicine shows that radiologists struggle to distinguish real X-rays from AI-generated fakes, even when they know synthetic images are included. In tests involving 264 images, accuracy varied widely, with some specialists performing barely above chance, and even advanced multimodal AI models failed to reliably detect the forgeries. Researchers warn that convincing deepfake medical images could enable fraudulent lawsuits, misdiagnoses, or attacks in which hackers inject synthetic scans into hospital networks. The team hopes the findings will drive the development of detection tools and training datasets that help clinicians recognize subtle signs of AI manipulation.
Read the full story on Gizmodo →