jhayes14 / UAN

Universal Adversarial Networks

Hyper-parameters for training #3

Open YanghaoZYH opened 4 years ago

YanghaoZYH commented 4 years ago

Hi Jamie,

Thanks for releasing this code. I am trying to reproduce the results reported in your paper for comparison. Could you provide the other hyper-parameters (e.g. shrink, shrink_inc, optimize_on_success) used for training UAN on CIFAR-10 and ImageNet, respectively?

Thanks in advance.

jhayes14 commented 4 years ago

Sorry for the late reply. I probably have this saved somewhere, I'll try and find them. What results are you getting using the default values? I would also advise updating the attack model (to something like a wide resnet), as I found this to give improved results in general.

YanghaoZYH commented 4 years ago

Hi, I am trying to run the code on CIFAR-10 with only the default 'imageSize' changed to 32, but it does not converge, and NaN losses sometimes occur. I also found that the ResNet generator gives better performance; thanks for your kind advice. https://github.com/jhayes14/UAN/blob/3b60d63241954c36be1df0a7dfad9e845652de1a/main.py#L497-L503 For the code above, it seems that when the loop breaks, the obtained noise is not strictly inside the constrained l_p norm ball for the subsequent test?

Let me know if you can find the saved results! Many thanks.

jhayes14 commented 4 years ago

To be honest the code is currently a mess and needs to be refactored. You are right that it currently breaks out of the loop when it should instead clamp the noise within the main loop. Feel free to submit a PR, or I will rewrite main.py when I get a day free.
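For reference, one way the clamping could look is a projection step applied inside the main loop after each update, so the perturbation always stays within the l_p ball before it is evaluated. This is only a hedged sketch, not the repo's current code; `project_lp`, `eps`, and `p` are hypothetical names introduced here for illustration:

```python
import torch

def project_lp(noise: torch.Tensor, eps: float, p: float = float("inf")) -> torch.Tensor:
    """Project a batch of perturbations back into the l_p ball of radius eps.

    Illustrative helper, not part of main.py. Assumes noise has shape
    (batch, channels, height, width).
    """
    if p == float("inf"):
        # l_inf ball: element-wise clamp.
        return noise.clamp(-eps, eps)
    # Finite p: rescale each sample only if its norm exceeds eps.
    flat = noise.view(noise.size(0), -1)
    norms = flat.norm(p=p, dim=1, keepdim=True).clamp(min=1e-12)
    factor = (eps / norms).clamp(max=1.0)
    return (flat * factor).view_as(noise)
```

Calling this every iteration (rather than breaking once the norm is exceeded) keeps the tested noise strictly inside the constraint while still letting the optimizer keep improving it.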