Hi! Thanks for your work.
In `generate_regularized_class_specific_samples.py`, as a regularizer, you clip gradients whose norm is too large. However, with this line you are clipping the gradients of the CNN's parameters:
`torch.nn.utils.clip_grad_norm(self.model.parameters(), clipping_value)`
Clipping `model.parameters()` after `backward()` doesn't actually affect the gradient that updates the image, and the usual practice here is to clip the gradient of the generated image itself. So shouldn't this be:
`torch.nn.utils.clip_grad_norm(self.processed_image, clipping_value)`
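A minimal sketch of what I mean, assuming names like `processed_image` and `clipping_value` from the script (the untrained AlexNet is just to make the snippet self-contained). Note that recent PyTorch deprecates `clip_grad_norm` in favor of the in-place `clip_grad_norm_`, which accepts a tensor or an iterable of tensors:

```python
import torch
from torchvision import models

model = models.alexnet()  # untrained, for illustration only
model.eval()

# the input image being optimized, not a model parameter
processed_image = torch.randn(1, 3, 224, 224, requires_grad=True)
clipping_value = 0.1

output = model(processed_image)
loss = -output[0, 130]  # maximize an arbitrary target class score
loss.backward()

# clip the gradient of the image itself, not model.parameters()
torch.nn.utils.clip_grad_norm_([processed_image], clipping_value)
print(processed_image.grad.norm())  # now <= clipping_value
```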
And the same goes for `.zero_grad`, I guess: `self.model.zero_grad()` only zeroes the parameters' gradients, so the accumulated image gradients are not zeroed out.
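And a sketch of the zeroing point, under the assumption that the optimizer is built over the image (as in the class-specific sample generator): since `model.zero_grad()` only touches parameter gradients, clearing the image gradient has to happen through the optimizer (or via `processed_image.grad.zero_()`):

```python
import torch
from torchvision import models

model = models.alexnet()
model.eval()

processed_image = torch.randn(1, 3, 224, 224, requires_grad=True)
# the optimizer is built over the image, so optimizer.zero_grad()
# clears processed_image.grad; model.zero_grad() would not
optimizer = torch.optim.SGD([processed_image], lr=6)

for step in range(3):
    optimizer.zero_grad()
    output = model(processed_image)
    loss = -output[0, 130]
    loss.backward()
    optimizer.step()
```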