Open · amirhagai opened 3 years ago

Hi! Thanks for this great library!

It seems that the torch implementation assumes the given model was trained with cross-entropy loss (line 80 of torch/attacks/fast_gradient_method.py). In the TF2 version (as in the paper), one can specify the loss function in the function signature (line 46 of tf2/attacks/fast_gradient_method.py).

Thanks!

Hi Amir,

Thanks for pointing this out. Indeed, it would be much better if the loss function were included in the function signature, as is done in the TensorFlow implementation. Would you like to send a PR addressing this?

Sure :)

Hey @tejuafonja, is this issue still open? I would like to work on it if that's the case.

Hi @Kkuntal990, yes it is. Please feel free to send a PR anytime : )

Hello, I've implemented a fix for this here: https://github.com/cleverhans-lab/cleverhans/pull/1222
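For reference, here is a minimal sketch of how a torch FGM implementation could take the loss function as a parameter, mirroring the TF2 signature. This is not the actual cleverhans code; the signature, defaults, and helper names below are assumptions for illustration only.

```python
import torch

def fast_gradient_method(model_fn, x, eps, norm, loss_fn=None, y=None, targeted=False):
    """Hypothetical FGM sketch with a pluggable loss_fn, defaulting to
    cross-entropy when none is given (as the current torch version hardcodes)."""
    if loss_fn is None:
        loss_fn = torch.nn.CrossEntropyLoss()
    x = x.clone().detach().requires_grad_(True)
    logits = model_fn(x)
    if y is None:
        # Use the model's own predictions to avoid label leaking.
        y = logits.argmax(dim=1)
    loss = loss_fn(logits, y)
    if targeted:
        # For a targeted attack, move toward the target class instead of away.
        loss = -loss
    loss.backward()
    if norm == float("inf"):
        perturbation = eps * x.grad.sign()
    elif norm == 2:
        g = x.grad.flatten(1)
        g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
        perturbation = eps * g.view_as(x)
    else:
        raise ValueError("norm must be inf or 2")
    return (x + perturbation).detach()
```

Any callable with a `(logits, labels)` signature (e.g. `torch.nn.MultiMarginLoss()`) could then be passed as `loss_fn`, which is the flexibility the TF2 version already exposes.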