IntelLabs / MART

Modular Adversarial Robustness Toolkit
BSD 3-Clause "New" or "Revised" License

Hide `Adversary` instead of `Adversary.Perturber` #159

Closed. mzweilin closed this 1 year ago.

mzweilin commented 1 year ago

What does this PR do?

Hide the `Adversary` module instead of `Adversary.Perturber`, so that we can make use of all `Trainer` features in `Adversary`.

We may configure the adversary not to be hidden when computing universal perturbations. A generic sketch of the hiding mechanic is included below.

Depends on #144
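For illustration only, here is a minimal, generic PyTorch sketch of what "hiding" a submodule means: keeping the adversary out of the parent `nn.Module` registry so the enclosing trainer never sees its parameters or state. The class and attribute names are hypothetical and do not reflect MART's actual implementation.

```python
import torch


class AdversarialWrapper(torch.nn.Module):
    """Hypothetical wrapper showing how a submodule can be hidden from its parent."""

    def __init__(self, model: torch.nn.Module, adversary: torch.nn.Module):
        super().__init__()
        self.model = model
        # Storing the adversary inside a plain list keeps it out of nn.Module's
        # submodule registry, so the outer trainer/optimizer never sees its
        # parameters, buffers, or state_dict entries.
        self._hidden_adversary = [adversary]

    @property
    def adversary(self) -> torch.nn.Module:
        return self._hidden_adversary[0]

    def forward(self, x, y=None):
        # The hidden adversary can still be invoked explicitly, e.g. to craft
        # adversarial inputs before the visible model runs.
        x_adv = self.adversary(x, y) if y is not None else x
        return self.model(x_adv)
```

Because the adversary is no longer registered on the parent module, it can run its own inner `Trainer`, with its own callbacks and loggers, without interfering with the outer training loop.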

Type of change

Please check all relevant options.

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

Before submitting

Did you have fun?

Make sure you had fun coding 🙃

mzweilin commented 1 year ago

Now I can see gradient logs on TensorBoard.

```bash
CUDA_VISIBLE_DEVICES=0 \
python -m mart \
  experiment=CIFAR10_CNN_Adv \
  fit=false \
  trainer=gpu \
  +trainer.limit_test_batches=1 \
  +callbacks@model.modules.input_adv_test.callbacks=gradient_monitor \
  +model.modules.input_adv_test.callbacks.gradient_monitor.frequency=1 \
  +logger@model.modules.input_adv_test.logger=[tensorboard]
```
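Conceptually, the gradient monitoring attached above is just a Lightning callback that logs gradient norms after each backward pass. The sketch below is not MART's `gradient_monitor` implementation; the class name and TensorBoard tags are assumptions.

```python
from pytorch_lightning.callbacks import Callback


class GradientMonitor(Callback):
    """Illustrative callback that logs per-parameter gradient norms to TensorBoard."""

    def __init__(self, frequency: int = 1):
        self.frequency = frequency

    def on_after_backward(self, trainer, pl_module):
        # Only log every `frequency` optimization steps.
        if trainer.global_step % self.frequency != 0:
            return
        for name, param in pl_module.named_parameters():
            if param.grad is not None:
                trainer.logger.experiment.add_scalar(
                    f"gradients/{name}", param.grad.norm(), trainer.global_step
                )
```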
dxoigmn commented 1 year ago

I also doubt there is a logger associated with the adversary's trainer.
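If the adversary's inner trainer is constructed without an explicit logger, nothing would reach TensorBoard, which is what the `+logger@model.modules.input_adv_test.logger=[tensorboard]` override above is meant to address. As a generic illustration (not MART's config), a Lightning `Trainer` gets a TensorBoard logger like this:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Hypothetical: give the adversary's inner trainer its own TensorBoard logger
# so callbacks such as a gradient monitor have somewhere to write.
adversary_trainer = Trainer(
    logger=TensorBoardLogger(save_dir="logs", name="adversary"),
    max_epochs=1,
)
```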

mzweilin commented 1 year ago

Closing in favor of #160.