cleverhans-lab / cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both
MIT License

Fast Gradient Method doesn't work for double precision: suggestion for a fix #1224

Open williampiat3 opened 2 years ago

williampiat3 commented 2 years ago


Describe the bug
I used the PGD method from this repository, but my models use double precision, so I hit an error when running the attack in cleverhans/cleverhans/torch/attacks/projected_gradient_descent.py.

I found a way to fix the problem. At line 74 of cleverhans/cleverhans/torch/attacks/fast_gradient_method.py, instead of:

x = x.clone().detach().to(torch.float).requires_grad_(True)

I put:

x = x.clone().detach().to(x.dtype).requires_grad_(True)

This fix covers both double and single precision, since it keeps the input's original dtype instead of forcing a cast to float32.
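To illustrate the problem and the fix, here is a minimal sketch (fgm_step is a hypothetical, stripped-down L-infinity FGSM step, not the library's full fast_gradient_method, which also handles other norms and clipping):

```python
import torch
import torch.nn.functional as F

def fgm_step(model, x, y, eps):
    # Keep the caller's dtype instead of hard-coding torch.float, so the
    # perturbed tensor stays compatible with the model's parameters.
    x = x.clone().detach().to(x.dtype).requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x + eps * x.grad.sign()

# Double-precision model and input: with .to(torch.float) the forward pass
# raises a dtype mismatch ("expected scalar type Double but found Float");
# with .to(x.dtype) it runs and the output stays float64.
model = torch.nn.Linear(4, 3).double()
x = torch.randn(2, 4, dtype=torch.float64)
y = torch.tensor([0, 2])
x_adv = fgm_step(model, x, y, eps=0.1)
assert x_adv.dtype == torch.float64
```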

kylematoba commented 2 years ago

This also breaks for torch.float16.
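For completeness, the same dtype-preserving line keeps half precision intact as well; a quick sketch of just the cast behavior (no model involved):

```python
import torch

x16 = torch.randn(2, 4, dtype=torch.float16)
# .to(torch.float) would silently upcast to float32;
# .to(x16.dtype) leaves the tensor in half precision.
x16_leaf = x16.clone().detach().to(x16.dtype).requires_grad_(True)
assert x16_leaf.dtype == torch.float16
```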