Closed: CNOCycle closed this issue 4 years ago
Hey,
I think this depends on the PyTorch version; newer ones probably don't accept how `view` is used in the code. I guess both solutions are fine, but I'll look into the details to fix it properly.
Yeah, the code is not always homogeneous; this will be addressed in future updates, thanks for pointing it out!
I'm not a PyTorch expert. Any update on this issue? Or which solution would you prefer? I can make a PR to fix this issue quickly.
I fixed this with `reshape(...)`. I still have to test it with different PyTorch versions (as this seems to be one of the causes). I plan to integrate it into the repo soon (within a few days) together with other updates.
This should be fixed now in `fab_tf.py`. Please let me know if you come across it again.
Thanks, this issue has been fixed. But the FAB attack's runtime performance is still poor. I guess it is caused by computing the gradients on the TF side.
Hi authors,
I'm not familiar with PyTorch, and I occasionally get some errors.
The error complains that the tensor is not contiguous and suggests that `reshape` is better than `view`:
https://github.com/fra31/auto-attack/blob/0185c7930e5c535ff3380197c54c74ba916f449b/fab_tf.py#L394-L407
I also found that the code is not consistent. As you can see, L397 and L401 use `view`, but L404 uses `reshape`.
An alternative solution is calling `.contiguous()` before `.view(...)`. Or `view` could be replaced by `reshape`.
I'm not sure which solution is suitable for this project.
Any suggestions?
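For reference, here is a minimal, self-contained sketch (a toy tensor, not the actual `fab_tf.py` code) that reproduces the error on a non-contiguous tensor and shows the two options mentioned above:

```python
import torch

# Transposing makes the tensor non-contiguous, so .view() raises a
# RuntimeError while .reshape() still works.
x = torch.arange(12.0).reshape(3, 4)
y = x.t()                      # non-contiguous view of x

print(y.is_contiguous())       # False

try:
    y.view(-1)                 # fails: view needs contiguous memory
except RuntimeError as e:
    print("view failed:", e)

# Option 1: make the data contiguous first, then view.
z1 = y.contiguous().view(-1)

# Option 2: reshape, which copies only when it has to.
z2 = y.reshape(-1)

print(torch.equal(z1, z2))     # True
```

Since `reshape` returns a view when the data is already contiguous and only copies otherwise, switching `view` to `reshape` shouldn't add overhead in the common case.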