greenelab / deep-review

A collaboratively written review paper on deep learning, genomics, and precision medicine
https://greenelab.github.io/deep-review/

Adversarial Attacks Against Medical Deep Learning Systems #864

Open nafizh opened 6 years ago

nafizh commented 6 years ago

The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud, we extend adversarial attacks to three popular medical imaging tasks, and we provide concrete examples of how and why such attacks could be realistically carried out. For each of our representative medical deep learning classifiers, white- and black-box attacks were both effective and human-imperceptible. We urge caution in employing deep learning systems in clinical settings, and encourage research into domain-specific defense strategies.

https://arxiv.org/abs/1804.05296
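
For anyone new to the topic, below is a minimal sketch of a gradient-based white-box attack, the classic fast gradient sign method (FGSM) of Goodfellow et al., assuming a trained PyTorch image classifier. This is only an illustration of the general idea, not the specific attacks evaluated in the paper; `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial version of `image` with a single gradient step.

    model:   a trained classifier (e.g. a medical image CNN), callable on a batch
    image:   input batch, pixel values assumed to lie in [0, 1]
    label:   ground-truth class indices for the batch
    epsilon: perturbation budget; small values keep the change imperceptible
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Example usage (hypothetical classifier and batch):
# adv_x = fgsm_attack(classifier, x_batch, y_batch, epsilon=0.02)
# classifier(adv_x).argmax(dim=1) may now disagree with y_batch
```

The paper's point is that even a small `epsilon` can flip a classifier's prediction while the perturbed image looks unchanged to a human reader, which is what makes such attacks worrying in clinical settings.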

cgreene commented 6 years ago

Oo! Exciting! I have to read this, but I have been talking about the potential in this space for a while. This is one of my main caution points when I talk to health care audiences.

nafizh commented 6 years ago

@cgreene Yes, really exciting! I watched the section on this in the video you posted in another discussion. I have to admit, I hadn't fully appreciated the urgency of adversarial attacks in medicine before. This paper makes a good case for it in its motivation section.