psandovalsegura / pytorch-gd-uap

Generalized Data-free Universal Adversarial Perturbations in PyTorch
MIT License

The performance of this code #1

Closed qilong-zhang closed 3 years ago

qilong-zhang commented 3 years ago

Hi,

I implemented this code and tested it on VGG-16, and the result is about 85%, but the result reported in the original paper is only 63%. Could you help me understand why this is happening?

psandovalsegura commented 3 years ago

If you take a look at the tensorflow code, the authors provide weights for their VGG-16. I tried loading these weights in this implementation, but they negatively affected the VGG-16's accuracy on ImageNet.

So, to answer your question, I think the reason the UAPs from this repo are more effective is due to the pre-trained weights from pytorch. Let me know if that helps or if you find a different reason.

qilong-zhang commented 3 years ago

@psandovalsegura I have a question about gduap.py. On the line

 loss = -sum(list(map(lambda activation: torch.log(torch.sum(torch.square(activation)) / 2), activations)))

why is torch.sum(torch.square(activation)) divided by 2?

psandovalsegura commented 3 years ago

@qilong-zhang When I was writing this, I wanted it to be as similar as possible to the tensorflow code which uses tf.nn.l2_loss when calculating the loss. The documentation for this function says it "computes half the L2 norm of a tensor without the sqrt".

I agree that if you were to follow the paper exactly (Line 7 of Algorithm 1), it wouldn't be divided by 2. But it shouldn't make much of a difference.
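To illustrate: a small sketch (not from the repo; the activation shapes are made up) showing that dividing by 2 inside the log only shifts each term by a constant log(2), so the gradients with respect to the activations, and hence the optimization of the perturbation, are unchanged.

```python
import math
import torch

# Toy tensors standing in for a network's layer activations (hypothetical shapes).
torch.manual_seed(0)
activations = [torch.randn(4, 8, requires_grad=True) for _ in range(3)]

# Loss as in gduap.py: half the squared L2 norm (matching tf.nn.l2_loss) inside the log.
loss_half = -sum(torch.log(torch.sum(torch.square(a)) / 2) for a in activations)

# Loss as in the paper (Line 7 of Algorithm 1): no division by 2.
loss_full = -sum(torch.log(torch.sum(torch.square(a))) for a in activations)

# log(x / 2) = log(x) - log(2), so the two losses differ only by a constant:
# loss_half - loss_full == len(activations) * log(2)
print((loss_half - loss_full).item())

# ...and their gradients w.r.t. the activations are identical.
g_half = torch.autograd.grad(loss_half, activations, retain_graph=True)
g_full = torch.autograd.grad(loss_full, activations)
print(all(torch.allclose(a, b) for a, b in zip(g_half, g_full)))  # True
```

So the `/2` keeps the code faithful to the tensorflow implementation without changing the UAP that gets learned.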

qilong-zhang commented 3 years ago

@psandovalsegura Thanks for your code!