mathcbc / advGAN_pytorch

A PyTorch implementation of the paper "Generating Adversarial Examples with Adversarial Networks" (AdvGAN).
254 stars · 65 forks

Generator loss doesn't converge #6

Open boom85423 opened 4 years ago

boom85423 commented 4 years ago

I really like your idea, but I found that the generator loss doesn't converge; instead it increases with the number of iterations.

Sincerely, kevin
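For context on why the total generator loss can behave this way: in AdvGAN the generator objective is a weighted sum of three terms (a GAN term, a perturbation-norm term, and an adversarial misclassification term), so the sum can rise even while the attack itself improves if one term dominates. A minimal sketch of the combination, not the repo's exact code; the weights `alpha` and `beta` are hypothetical, and plain cross-entropy stands in for the paper's C&W-style margin loss:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake, perturbation, logits_adv, target_labels,
                   alpha=1.0, beta=10.0):
    """Sketch of an AdvGAN-style generator loss (weights are illustrative)."""
    # GAN term: push the discriminator's score on perturbed images toward "real" (1)
    loss_g_fake = F.mse_loss(d_fake, torch.ones_like(d_fake))
    # Perturbation term: mean L2 norm of the perturbation over the batch
    loss_perturb = torch.mean(
        torch.norm(perturbation.view(perturbation.size(0), -1), 2, dim=1))
    # Adversarial term: drive the target model toward the attack labels
    # (cross-entropy here for simplicity; the paper uses a C&W-style loss)
    loss_adv = F.cross_entropy(logits_adv, target_labels)
    return loss_g_fake + alpha * loss_adv + beta * loss_perturb
```

Watching the three terms separately (as the logs below do) is usually more informative than watching their sum.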

wangxiao5791509 commented 4 years ago

@boom85423 my results are:

(advsample) wangxiao@wx:~/Downloads/advGAN_pytorch$ python3 train_target_model.py
CUDA Available: True
loss in epoch 0: 236.767654
loss in epoch 1: 75.487640
loss in epoch 2: 55.602177
loss in epoch 3: 41.992401
loss in epoch 4: 35.540051
loss in epoch 5: 32.529961
loss in epoch 6: 27.164566
loss in epoch 7: 25.420187
loss in epoch 8: 21.742592
loss in epoch 9: 21.038944
loss in epoch 10: 17.624969
loss in epoch 11: 18.412670
loss in epoch 12: 15.708488
loss in epoch 13: 16.841516
loss in epoch 14: 13.482550
loss in epoch 15: 13.857362
loss in epoch 16: 13.124444
loss in epoch 17: 10.685254
loss in epoch 18: 11.013978
loss in epoch 19: 11.986994
loss in epoch 20: 5.749249
loss in epoch 21: 4.034207
loss in epoch 22: 2.693571
loss in epoch 23: 1.913211
loss in epoch 24: 2.116817
loss in epoch 25: 1.671541
loss in epoch 26: 1.352745
loss in epoch 27: 1.490222
loss in epoch 28: 1.396703
loss in epoch 29: 1.541867
loss in epoch 30: 1.228402
loss in epoch 31: 1.080579
loss in epoch 32: 0.851639
loss in epoch 33: 0.977648
loss in epoch 34: 0.665363
loss in epoch 35: 0.624104
loss in epoch 36: 0.727104
loss in epoch 37: 0.529303
loss in epoch 38: 1.012875
loss in epoch 39: 0.949940
accuracy in testing set: 0.994400

(advsample) wangxiao@wx:~/Downloads/advGAN_pytorch$ python3 main.py
CUDA Available: True
epoch 1: loss_D: 0.227, loss_G_fake: 0.511,
loss_perturb: 13.381, loss_adv: 36.250,

epoch 2: loss_D: 0.020, loss_G_fake: 0.843,
loss_perturb: 15.427, loss_adv: 10.569,

epoch 3: loss_D: 0.008, loss_G_fake: 0.908,
loss_perturb: 15.964, loss_adv: 5.158,

epoch 4: loss_D: 0.002, loss_G_fake: 0.944,
loss_perturb: 16.121, loss_adv: 3.829,

epoch 5: loss_D: 0.003, loss_G_fake: 0.949,
loss_perturb: 16.099, loss_adv: 2.874,

epoch 6: loss_D: 0.001, loss_G_fake: 0.963,
loss_perturb: 16.076, loss_adv: 2.688,

epoch 7: loss_D: 0.001, loss_G_fake: 0.968,
loss_perturb: 16.252, loss_adv: 2.432,

epoch 8: loss_D: 0.000, loss_G_fake: 0.980,
loss_perturb: 16.320, loss_adv: 2.289,

epoch 9: loss_D: 0.004, loss_G_fake: 0.965,
loss_perturb: 16.276, loss_adv: 2.138,

epoch 10: loss_D: 0.000, loss_G_fake: 0.979,
loss_perturb: 16.665, loss_adv: 2.203,

epoch 11: loss_D: 0.000, loss_G_fake: 0.983,
loss_perturb: 16.680, loss_adv: 1.739,

epoch 12: loss_D: 0.001, loss_G_fake: 0.984,
loss_perturb: 16.416, loss_adv: 1.688,

epoch 13: loss_D: 0.001, loss_G_fake: 0.980,
loss_perturb: 16.682, loss_adv: 1.745,

epoch 14: loss_D: 0.000, loss_G_fake: 0.988,
loss_perturb: 16.946, loss_adv: 1.790,

epoch 15: loss_D: 0.000, loss_G_fake: 0.990,
loss_perturb: 16.828, loss_adv: 1.522,

epoch 16: loss_D: 0.000, loss_G_fake: 0.990,
loss_perturb: 16.737, loss_adv: 1.440,

epoch 17: loss_D: 0.002, loss_G_fake: 0.982,
loss_perturb: 16.857, loss_adv: 1.367,

epoch 18: loss_D: 0.000, loss_G_fake: 0.987,
loss_perturb: 16.639, loss_adv: 1.173,

epoch 19: loss_D: 0.000, loss_G_fake: 0.989,
loss_perturb: 16.282, loss_adv: 1.129,

epoch 20: loss_D: 0.000, loss_G_fake: 0.991,
loss_perturb: 16.755, loss_adv: 1.448,

epoch 21: loss_D: 0.001, loss_G_fake: 0.988,
loss_perturb: 16.738, loss_adv: 1.036,

epoch 22: loss_D: 0.000, loss_G_fake: 0.992,
loss_perturb: 16.453, loss_adv: 1.015,

epoch 23: loss_D: 0.000, loss_G_fake: 0.994,
loss_perturb: 16.310, loss_adv: 1.123,

epoch 24: loss_D: 0.000, loss_G_fake: 0.995,
loss_perturb: 16.105, loss_adv: 0.893,

epoch 25: loss_D: 0.000, loss_G_fake: 0.993,
loss_perturb: 15.992, loss_adv: 0.928,

epoch 26: loss_D: 0.001, loss_G_fake: 0.992,
loss_perturb: 15.665, loss_adv: 0.911,

epoch 27: loss_D: 0.000, loss_G_fake: 0.993,
loss_perturb: 15.705, loss_adv: 0.837,

epoch 28: loss_D: 0.000, loss_G_fake: 0.995,
loss_perturb: 15.741, loss_adv: 0.905,

epoch 29: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 15.964, loss_adv: 0.967,

epoch 30: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 15.536, loss_adv: 0.755,

epoch 31: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 14.882, loss_adv: 0.587,

epoch 32: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 14.579, loss_adv: 0.679,

epoch 33: loss_D: 0.000, loss_G_fake: 0.996,
loss_perturb: 14.361, loss_adv: 0.833,

epoch 34: loss_D: 0.000, loss_G_fake: 0.995,
loss_perturb: 14.392, loss_adv: 0.580,

epoch 35: loss_D: 0.000, loss_G_fake: 0.994,
loss_perturb: 14.082, loss_adv: 0.682,

epoch 36: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 14.155, loss_adv: 0.686,

epoch 37: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 14.137, loss_adv: 0.626,

epoch 38: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 14.282, loss_adv: 0.566,

epoch 39: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 13.741, loss_adv: 0.613,

epoch 40: loss_D: 0.000, loss_G_fake: 0.996,
loss_perturb: 13.482, loss_adv: 0.563,

epoch 41: loss_D: 0.000, loss_G_fake: 0.996,
loss_perturb: 12.970, loss_adv: 0.547,

epoch 42: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 12.972, loss_adv: 0.505,

epoch 43: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 12.608, loss_adv: 0.464,

epoch 44: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 12.404, loss_adv: 0.496,

epoch 45: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 12.061, loss_adv: 0.519,

epoch 46: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 12.121, loss_adv: 0.496,

epoch 47: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 12.240, loss_adv: 0.448,

epoch 48: loss_D: 0.000, loss_G_fake: 0.999,
loss_perturb: 11.949, loss_adv: 0.413,

epoch 49: loss_D: 0.001, loss_G_fake: 0.995,
loss_perturb: 11.400, loss_adv: 0.346,

epoch 50: loss_D: 0.000, loss_G_fake: 0.997,
loss_perturb: 11.101, loss_adv: 0.274,

epoch 51: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 10.560, loss_adv: 0.213,

epoch 52: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 10.100, loss_adv: 0.181,

epoch 53: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 9.611, loss_adv: 0.176,

epoch 54: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 9.142, loss_adv: 0.161,

epoch 55: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 8.897, loss_adv: 0.195,

epoch 56: loss_D: 0.000, loss_G_fake: 0.998,
loss_perturb: 8.707, loss_adv: 0.170,

epoch 57: loss_D: 0.000, loss_G_fake: 0.999,
loss_perturb: 8.368, loss_adv: 0.154,

epoch 58: loss_D: 0.000, loss_G_fake: 0.999,
loss_perturb: 8.153, loss_adv: 0.157,

epoch 59: loss_D: 0.000, loss_G_fake: 0.999,
loss_perturb: 7.972, loss_adv: 0.146,

epoch 60: loss_D: 0.000, loss_G_fake: 0.999,
loss_perturb: 7.753, loss_adv: 0.135,

(advsample) wangxiao@wx:~/Downloads/advGAN_pytorch$ python3 test_adversarial_examples.py
CUDA Available: True
MNIST training dataset:
num_correct: 162
accuracy of adv imgs in training set: 0.002700

num_correct: 44
accuracy of adv imgs in testing set: 0.004400

Similar to yours?

javaerydl commented 1 year ago

> @boom85423 my results are: (quoting wangxiao5791509's logs above)

Hey, mine won't run; it says the pre-trained model can't be found. I don't know what went wrong. Any help would be appreciated. (screenshot attached)
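A likely cause of a "model not found" error here is that main.py tries to load the target-model checkpoint before train_target_model.py has been run to save one. A hypothetical pre-flight check; the checkpoint filename below is an assumption, so substitute whatever path your main.py actually loads:

```python
import os

def check_checkpoint(path):
    """Return True if the target-model checkpoint exists; otherwise
    print a hint and return False."""
    if os.path.exists(path):
        return True
    print(f"{path} not found: run train_target_model.py first so the "
          "target model is saved before training AdvGAN against it.")
    return False

# Hypothetical checkpoint filename; replace with the path main.py loads.
check_checkpoint("./MNIST_target_model.pth")
```

Also check that you run the scripts from the repo root, since relative checkpoint paths resolve against the current working directory.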

ha2ha2 commented 1 year ago

I have the same problem.

bieyl commented 2 months ago

@javaerydl @ha2ha2 Hello! To confirm: is your question about using the model trained with the provided code? If so, have you managed to solve it?

javaerydl commented 2 months ago

> @javaerydl @ha2ha2 Hello! To confirm: is your question about using the model trained with the provided code? If so, have you managed to solve it?

I have solved it, thank you.

bieyl commented 2 months ago

@wangxiao5791509 Looking at your results, why are the loss_G_fake values so large? Is that reasonable?

epoch 55: loss_D: 0.000, loss_G_fake: 0.998, loss_perturb: 8.897, loss_adv: 0.195,

epoch 56: loss_D: 0.000, loss_G_fake: 0.998, loss_perturb: 8.707, loss_adv: 0.170,

epoch 57: loss_D: 0.000, loss_G_fake: 0.999, loss_perturb: 8.368, loss_adv: 0.154,

epoch 58: loss_D: 0.000, loss_G_fake: 0.999, loss_perturb: 8.153, loss_adv: 0.157,

epoch 59: loss_D: 0.000, loss_G_fake: 0.999, loss_perturb: 7.972, loss_adv: 0.146,

epoch 60: loss_D: 0.000, loss_G_fake: 0.999, loss_perturb: 7.753, loss_adv: 0.135,
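One plausible reading, assuming the repo uses LSGAN-style losses: loss_G_fake is the mean squared error between the discriminator's output on perturbed images and the "real" label 1. Once the discriminator confidently scores perturbed images near 0 (which is also why loss_D is about 0.000), this term saturates near 1.0, matching the 0.99x values in the log; by itself that is not a bug. A small numeric check of the saturation:

```python
import torch
import torch.nn.functional as F

# Suppose the discriminator outputs values near 0 for perturbed images
d_on_fake = torch.tensor([0.01, 0.02, 0.005, 0.03])

# LSGAN-style generator GAN term: push D's output toward the "real" label 1
loss_g_fake = F.mse_loss(d_on_fake, torch.ones_like(d_on_fake))

# D(fake) ~ 0  =>  (1 - D(fake))^2 ~ 1, so the term saturates near 1.0
print(round(loss_g_fake.item(), 3))
```

In that regime the attack quality is better judged by loss_perturb and loss_adv (or directly by the adversarial accuracy reported by test_adversarial_examples.py).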