Closed: admk closed this issue 3 years ago.
Currently we are using a work-around hack that subtracts the machine epsilon of 1.0 from epsilon: https://github.com/admk/TRADES/blob/master/convert.py#L8.
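For reference, a minimal sketch of the kind of margin trick the workaround uses (this is not the exact convert.py code; the shapes and epsilon value below are only illustrative):

```python
import numpy as np
import torch

def clamp_with_margin(x_adv, x, epsilon):
    # Shrink epsilon by the machine epsilon at 1.0 (~1.19e-7 in float32) so the
    # recovered perturbation stays strictly inside the epsilon-ball even after
    # the extra rounding introduced when x is added back.
    safe_eps = float(epsilon - np.finfo(np.float32).eps)
    return torch.clamp(x_adv - x, -safe_eps, safe_eps) + x

# Illustrative values only: CIFAR-10-shaped inputs in [0, 1], epsilon = 0.031.
x = torch.rand(8, 3, 32, 32)
x_adv = clamp_with_margin(x + 0.1 * torch.randn_like(x), x, 0.031)
print("max |x_adv - x|:", (x_adv - x).abs().max().item())  # stays below 0.031
```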
Hi, thanks for bringing it to our attention. Did you try other versions of PyTorch? Our code is based on version 0.4.1 and we did not encounter the same issue before.
We were trying to evaluate our attack with the CIFAR-10 model. This is our script to convert the saved images to a .npy file: https://github.com/admk/TRADES/blob/master/convert.py

We are using the same
xadv = torch.clamp(xadv - x, -epsilon, epsilon) + x
as in https://github.com/yaodongyu/TRADES/blob/master/pgd_attack_cifar10.py#L76 to guarantee the boundaries, but it didn't work for us because of floating-point rounding errors. Do you know how we can reliably torch.clamp the ranges for your checks?

Update: with PyTorch==1.7.0, CPU and GPU gave different magnitudes of rounding errors.
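For reference, here is a small self-contained sketch (not code from either repository; the tensors and epsilon are made up) that reproduces the kind of overshoot we see and compares CPU against GPU:

```python
import torch

# Clamping the perturbation and then adding x back re-rounds the sum, so the
# recovered |x_adv - x| can end up a few ULPs above epsilon, and the size of
# the overshoot can differ between CPU and GPU kernels.
epsilon = 0.031

devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    x = torch.rand(100000, device=device)
    delta = 0.1 * torch.randn_like(x)  # delta plays the role of xadv - x

    # Same clamping pattern as the line quoted above.
    x_adv = torch.clamp(delta, -epsilon, epsilon) + x
    overshoot = ((x_adv - x).abs() - epsilon).clamp(min=0)
    print(device, "max overshoot over epsilon:", overshoot.max().item())
```

If the overshoot is bounded by roughly half a ULP of x + epsilon (around 6e-8 for inputs in [0, 1]), then subtracting the machine epsilon of 1 from epsilon, as in the workaround above, should be enough to keep the recovered perturbation within the bound.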