I am working with PyTorch 1.13.1 and Python 3.10.12.
When using the CleverHans CW attack for PyTorch, the attack script runs into 3 errors.
1) On line 108 of the attack .py file:
const = x.new_ones(len(x), 1) * initial_const
The following error comes up:
TypeError: new_ones(): argument 'dtype' must be torch.dtype, not int
To solve this, I assumed the 1 was supposed to denote a dimension of the tensor rather than a dtype, so I wrapped the size arguments in another set of parentheses:

const = x.new_ones((len(x), 1)) * initial_const
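As a quick sanity check that the extra parentheses do what I assumed (the tuple is read as a size, and the dtype is inherited from x), I tried a standalone snippet with a placeholder value in place of initial_const:

```python
import torch

# Stand-in for the input batch x in the attack script
x = torch.randn(4, 3, dtype=torch.float64)

# With the size passed as a tuple, the 1 is read as a dimension, not a dtype
const = x.new_ones((len(x), 1)) * 0.5  # 0.5 is a placeholder for initial_const

print(const.shape)  # torch.Size([4, 1])
print(const.dtype)  # torch.float64, inherited from x
```

So the fix keeps the intended (batch, 1) shape and x's dtype.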
2) On line 134, I get an error caused by line 123, which returns a non-leaf Tensor. I was thinking that at some point a version update might have changed this function so that it no longer returns a leaf tensor, but regardless, I fixed it using PyTorch's zeros function, since the documentation of zeros_like describes these functions as equivalent if you give them the corresponding parameters.
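I haven't quoted line 123 here, but to convince myself the replacement is safe I reproduced the general pattern on a toy tensor (x is just a stand-in for the attack's input): any op applied after creation, even a dtype cast, turns a requires_grad tensor into a non-leaf, while building it directly with torch.zeros keeps it a leaf that an optimizer will accept:

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)  # stand-in for the input

# A tensor created with requires_grad=True and then transformed by any op
# (here a dtype cast) is no longer a leaf, so optimizers reject it
non_leaf = torch.zeros_like(x, requires_grad=True).float()
print(non_leaf.is_leaf)  # False

# Building the tensor directly with torch.zeros keeps it a leaf;
# per the docs, zeros_like(x) is equivalent to
# torch.zeros(x.size(), dtype=x.dtype, layout=x.layout, device=x.device)
leaf = torch.zeros(x.size(), dtype=torch.float32, device=x.device,
                   requires_grad=True)
print(leaf.is_leaf)  # True
torch.optim.Adam([leaf], lr=0.01)  # accepted, no "non-leaf Tensor" error
```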
3) I received this error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
To solve this, I just specified retain_graph=True:
loss.backward(retain_graph=True)
But I'm not sure if this is the most efficient fix...
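For context on the efficiency question: I can't tell which tensor in the attack script is being reused, but my understanding is that this error usually means a graph built once is backwarded through repeatedly. A toy illustration (not the attack code) of both the retain_graph workaround and the rebuild-each-iteration alternative:

```python
import torch

w = torch.ones(1, requires_grad=True)

# Reusing a graph built outside the loop forces retain_graph=True
y = w * 2  # graph built once
for _ in range(2):
    loss = (y - 1).pow(2).sum()
    loss.backward(retain_graph=True)  # works, but keeps saved buffers alive

# Rebuilding the graph each iteration avoids the flag (and the extra memory)
w.grad = None
for _ in range(2):
    y = w * 2  # fresh graph every pass
    loss = (y - 1).pow(2).sum()
    loss.backward()  # no retain_graph needed
```

If the attack script recomputes the loss from scratch each iteration, retain_graph=True shouldn't be necessary, which is partly why I suspect my fix may be papering over something.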
While the solutions I implemented make the attack script run and seemingly work, I am not sure whether I am missing something or have unknowingly changed the code's functionality in some way. I would really appreciate any feedback and/or guidance.