yaodongyu / TRADES

TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization)
MIT License

Nasty floating-point rounding errors #20

Closed. admk closed this issue 3 years ago.

admk commented 3 years ago

We were trying to evaluate our attack with the CIFAR-10 model. This is our script to convert saved images to a .npy file: https://github.com/admk/TRADES/blob/master/convert.py

We are using the same `xadv = torch.clamp(xadv - x, -epsilon, epsilon) + x` as in https://github.com/yaodongyu/TRADES/blob/master/pgd_attack_cifar10.py#L76 to keep the perturbation within bounds, but it didn't work for us because of floating-point rounding errors: [screenshot of out-of-bounds values]

Do you know how we can reliably clamp the values with `torch.clamp` so that they pass your boundary checks?

Update: with PyTorch 1.7.0, CPU and GPU produced rounding errors of different magnitudes.

admk commented 3 years ago

Currently we are using a workaround hack that subtracts the machine epsilon of 1 from epsilon: https://github.com/admk/TRADES/blob/master/convert.py#L8.
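A NumPy sketch of that workaround (the variable names are illustrative, not taken from convert.py): shrinking the clamp bound by `np.finfo(np.float32).eps` leaves enough headroom that the subsequent rounding in `+ x` can no longer push the recovered perturbation past `eps`.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = np.float32(8.0 / 255.0)
# shrink the clamp bound by the machine epsilon of 1.0 (~1.19e-7 in float32)
eps_safe = eps - np.finfo(np.float32).eps

x = rng.random(100_000, dtype=np.float32)
xadv = np.clip(x + rng.uniform(-0.1, 0.1, size=x.shape).astype(np.float32), 0.0, 1.0)

# clamp with the shrunken bound; the round-trip now stays within eps
xadv_c = np.clip(xadv - x, -eps_safe, eps_safe) + x
violations = int((np.abs(xadv_c - x) > eps).sum())
print(violations)
```

The margin works because the rounding error of `eps_safe + x` for `x` in [0, 1] is at most half a ULP of a number below 2, i.e. at most 2^-24, which is smaller than the 2^-23 we subtracted.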

hongyanz commented 3 years ago

Hi, thanks for bringing this to our attention. Did you try other versions of PyTorch? Our code is based on version 0.4.1, and we have not encountered this issue before.