jubueche / Resnet32-ICLR


Inconsistent results #1

Open houyaoqi17 opened 8 months ago

houyaoqi17 commented 8 months ago

Hello, thank you very much for releasing the code for reference. When I replicated your experiment, however, I did not get consistent results: my test accuracy was 89.84, compared to 91.12 in the log you provided. The loss in my reproduction is also smaller than in your log and converges faster. Have the experimental conditions changed in the latest code, and how should I set them to get a reasonably consistent result?

Logs in the code: Resnet32-ICLR/Resources/Logs/5387663951.log at master (https://github.com/jubueche/Resnet32-ICLR/blob/master/Resources/Logs/5387663951.log)

Loaded pretrained model from /ibm/gpfs-homes/jbu/Master-Thesis/Resources/cifar10_pretrained_models/resnet32.th
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
/u/jbu/.conda/envs/msc/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

  • Prec@1 99.440
  • Prec@1 14.440
  • Prec@1 14.460
  • Prec@1 13.320
  • Prec@1 15.740
  • Noisy Prec@1 14.51
    After loading, with clipping@2.00 14.44 w/o 99.44 noisy: 14.51pm1.21
    current lr 2.00000e-02
    Epoch: [60][0/176] Time 1.032 (1.032) Data 0.519 (0.519) Loss 1.2845 (1.2845) Prec@1 76.953 (76.953)
    Epoch: [60][17/176] Time 0.463 (0.498) Data 0.001 (0.030) Loss 0.9598 (1.0205) Prec@1 76.953 (80.490)
    Epoch: [60][34/176] Time 0.473 (0.488) Data 0.001 (0.016) Loss 0.7782 (0.9444) Prec@1 78.906 (80.234)
    Epoch: [60][51/176] Time 0.463 (0.483) Data 0.001 (0.011) Loss 0.6287 (0.8837) Prec@1 89.062 (81.438)
    Epoch: [60][68/176] Time 0.466 (0.480) Data 0.001 (0.009) Loss 0.6493 (0.8335) Prec@1 85.938 (82.779)
    Epoch: [60][85/176] Time 0.465 (0.478) Data 0.001 (0.007) Loss 0.6132 (0.7895) Prec@1 84.375 (83.598)
    Epoch: [60][102/176] Time 0.470 (0.477) Data 0.001 (0.006) Loss 0.6502 (0.7627) Prec@1 88.281 (84.166)
    Epoch: [60][119/176] Time 0.466 (0.476) Data 0.001 (0.005) Loss 0.6334 (0.7413) Prec@1 88.281 (84.626)
    Epoch: [60][136/176] Time 0.464 (0.475) Data 0.001 (0.005) Loss 0.6120 (0.7264) Prec@1 87.500 (85.076)
    Epoch: [60][153/176] Time 0.466 (0.475) Data 0.001 (0.005) Loss 0.4708 (0.7124) Prec@1 87.500 (85.382)
    Epoch: [60][170/176] Time 0.459 (0.475) Data 0.000 (0.004) Loss 0.7082 (0.7008) Prec@1 85.547 (85.609)
  • Prec@1 84.740
  • Prec@1 77.120
  • Prec@1 80.720
  • Prec@1 78.060
  • Noisy Prec@1 78.63
    • New best: 78.63333

My logs:

Loaded pretrained model from /home/houyq/code/temp/Resnet32-ICLR-master/Resources/cifar10_pretrained_models/resnet32.th
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Use AdversarialLoss
/opt/conda/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "

  • Prec@1 99.440
  • Prec@1 14.440
  • Prec@1 14.540
  • Prec@1 13.240
  • Prec@1 15.700
  • Noisy Prec@1 14.49
    After loading, with clipping@2.00 14.44 w/o 99.44 noisy: 14.49pm1.23
    current lr 2.00000e-02
    Epoch: [60][0/176] Time 0.986 (0.986) Data 0.538 (0.538) Loss 1.0458 (1.0458) Prec@1 90.234 (90.234)
    Epoch: [60][17/176] Time 0.204 (0.244) Data 0.001 (0.031) Loss 0.7880 (0.8165) Prec@1 92.188 (94.271)
    Epoch: [60][34/176] Time 0.193 (0.221) Data 0.001 (0.016) Loss 0.6808 (0.7606) Prec@1 90.625 (92.835)
    Epoch: [60][51/176] Time 0.205 (0.218) Data 0.001 (0.011) Loss 0.5688 (0.7148) Prec@1 94.141 (93.014)
    Epoch: [60][68/176] Time 0.195 (0.215) Data 0.001 (0.009) Loss 0.4856 (0.6743) Prec@1 95.703 (93.342)
    Epoch: [60][85/176] Time 0.204 (0.213) Data 0.001 (0.007) Loss 0.5070 (0.6442) Prec@1 97.266 (93.668)
    Epoch: [60][102/176] Time 0.198 (0.212) Data 0.001 (0.006) Loss 0.5217 (0.6320) Prec@1 96.094 (93.617)
    Epoch: [60][119/176] Time 0.203 (0.211) Data 0.001 (0.005) Loss 0.4916 (0.6178) Prec@1 95.312 (93.776)
    Epoch: [60][136/176] Time 0.211 (0.211) Data 0.001 (0.005) Loss 0.5097 (0.6070) Prec@1 95.312 (93.773)
    Epoch: [60][153/176] Time 0.197 (0.209) Data 0.001 (0.004) Loss 0.4161 (0.5945) Prec@1 95.703 (93.925)
    Epoch: [60][170/176] Time 0.189 (0.209) Data 0.000 (0.004) Loss 0.5926 (0.5860) Prec@1 93.359 (94.010)
  • Prec@1 88.740
  • Prec@1 79.260
  • Prec@1 83.980
  • Prec@1 87.980
  • Noisy Prec@1 83.74
    • New best: 83.74000

jubueche commented 8 months ago

Hi,

Is the code deterministic? I remember this code was for the rebuttal, so we uploaded it quickly afterwards. Maybe we forgot to set a seed, or maybe we also used a different commit hash.

All the best, Julian
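
For context, a PyTorch run is only repeatable if every random number generator involved is seeded. A minimal sketch of the usual set (which of these this repo actually needs is an assumption):

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 1) -> None:
    """Fix the RNGs a typical PyTorch training run draws from."""
    random.seed(seed)                  # Python's own RNG
    np.random.seed(seed)               # NumPy (e.g. data augmentation)
    torch.manual_seed(seed)            # CPU and CUDA tensor RNGs
    torch.cuda.manual_seed_all(seed)   # all GPUs explicitly
    torch.backends.cudnn.deterministic = True  # trade speed for determinism
    torch.backends.cudnn.benchmark = False
```

Even with all of this set, different builds can legitimately diverge; note that the two logs above come from different environments (a python3.7 conda install vs. a python3.8 one, with lr_scheduler warnings at different line numbers, suggesting different PyTorch versions).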



houyaoqi17 commented 8 months ago

> Is the code deterministic? I remember this code was for the rebuttal, so we uploaded it quickly afterwards. Maybe we forgot to set a seed, or maybe we also used a different commit hash.

Thank you for your reply. It looks like there is a fixed seed in the code. Could you provide the commit hash of the submitted version, or any other information that could explain this gap?

jubueche commented 8 months ago

Hi,

What number are you referring to, exactly?

All the best



houyaoqi17 commented 8 months ago

> What number are you referring to, exactly?

python3 trainer_resnet.py -seed=1 -batch_size=256 -weight_decay=0.0005 -momentum=0.9 -save_every=10 -lr=0.1 -n_attack_steps=3 -beta_robustness=0.1 -eta_train=0.11 -eta_mode=range -clipping_alpha=2.0 -attack_size_mismatch=0.05 -initial_std=0.001 -pretrained -burn_in=0 -workers=4 -n_epochs=300 -dataset=cifar10 -architecture=resnet32 -start_epoch=60 -data_dir=/dataP/jbu -session_id=5387663951
The command was like this. The seed here only specifies the random number used to split off the validation data set; there doesn't seem to be any other random number specified in the code. But should randomness alone have that much impact?
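
For what it's worth, the usual way to make such a validation split reproducible in PyTorch is to pass a seeded generator to random_split. A minimal sketch (the 45000/5000 split sizes and the transform are assumptions, not taken from the repo):

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Load CIFAR-10 and carve off a validation set with a fixed seed, so the
# split itself is identical across runs (hypothetical sizes).
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
gen = torch.Generator().manual_seed(1)  # a fixed seed for the split only
train_set, val_set = random_split(full_train, [45000, 5000], generator=gen)
```

Even with the split pinned down, data-loader shuffling order, nondeterministic CUDA kernels, and any sampled weight noise can still differ between runs unless they are seeded as well.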

jubueche commented 8 months ago

Could you format the log so that it is easier to read? Also, where did you get this 91.12% from? Sorry, it has been some time since I looked at that code.

houyaoqi17 commented 8 months ago

> Could you format the log so that it is easier to read? Also, where did you get this 91.12% from? Sorry, it has been some time since I looked at that code.

I used the command from here: https://github.com/jubueche/Resnet32-ICLR/blob/master/Resources/Logs/5387663951.log#L16

and the result of 91.12% comes from here: https://github.com/jubueche/Resnet32-ICLR/blob/master/Resources/Logs/5387663951.log#L4176

jubueche commented 8 months ago

Hi,

This log seems to be from November 8 (see the timestamp). Try a commit that roughly matches that date (i.e., search for a commit whose commit date is shortly after Nov 8) and try again. I also think that in the log you sent, the number of epochs at the end is suspiciously low.

All the best, Julian
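
(For reference, commits in that window can be listed with `git log --since=2021-11-08 --oneline` and checked out with `git checkout <hash>`, using whatever cutoff date matches the log's timestamp.)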


houyaoqi17 commented 8 months ago

> This log seems to be from November 8 (see the timestamp). Try a commit that roughly matches that date and try again. I also think that in the log you sent, the number of epochs at the end is suspiciously low.

Thanks for your reply. Based on the time you provided, I found the initial version of the code ("Initial", jubueche/Resnet32-ICLR@ae84fa1); its commit timestamp is Nov 22, 2021.

I also found that a package named ais is used there (Resnet32-ICLR/trainer_resnet.py at ae84fa19b5e1293dcb58c8047804861c87d65292), but it was removed in later code. So was the model trained and tested under those conditions?

jubueche commented 8 months ago

Hi,

Yes, AIS is a deprecated internal codebase of IBM. We are no longer using it for hardware-aware training, since we now have the much better AIHWKIT. However, using the adversarial loss function from the paper should still be possible even with AIHWKIT. If you want, we could set up a meeting to discuss further. Just write an e-mail to @.***

All the best, Julian
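
For anyone who wants to reproduce the training objective without AIS: the flags in the training command earlier in the thread (-beta_robustness, -n_attack_steps, -attack_size_mismatch) suggest a loss of the form clean loss + beta * loss under an adversarial relative weight perturbation. A rough sketch in plain PyTorch, as a generic reconstruction under those assumptions rather than the repository's actual AIS-based implementation:

```python
import torch

def adversarial_weight_step(model, x, y, criterion, optimizer,
                            beta=0.1, n_steps=3, rel_size=0.05):
    """One step of L_clean + beta * L_adversarial (sketch, not the repo's code)."""
    params = [p for p in model.parameters() if p.requires_grad]
    originals = [p.detach().clone() for p in params]

    # Gradient of the clean term, taken at the unperturbed weights.
    optimizer.zero_grad()
    clean_loss = criterion(model(x), y)
    clean_loss.backward()

    # Ascend the loss with a few signed steps sized relative to |w|
    # (mirrors -n_attack_steps and -attack_size_mismatch).
    step = rel_size / n_steps
    for _ in range(n_steps):
        grads = torch.autograd.grad(criterion(model(x), y), params)
        with torch.no_grad():
            for p, g, w0 in zip(params, grads, originals):
                p.add_(step * g.sign() * w0.abs())

    # Accumulate the gradient of the robust term at the perturbed weights.
    # (Note: each extra forward pass also updates BatchNorm running stats.)
    robust_loss = criterion(model(x), y)
    (beta * robust_loss).backward()

    # Restore the clean weights, then apply the combined update.
    with torch.no_grad():
        for p, w0 in zip(params, originals):
            p.copy_(w0)
    optimizer.step()
    return clean_loss.item(), robust_loss.item()
```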

