You'll probably want to rename this `batch_canonicalization` or `batch_c18n`. Also, since you control what you feed as the `train_dataloader` to `Adversary`, why not canonicalize on `batch` being a `dict` instead of a `tuple`? That feels infinitely more flexible. Will you also get rid of `NormalizedAdversaryAdapter`?
Thanks for the suggestion. I have renamed it to `batch_c15n` for simplicity.

I think it's good to keep the tuple `(input, target)` as the canonical batch form, because both elements are required by many sub-components in `Adversary`, such as `Enforcer`. Making it a dictionary may create the illusion that the two parameters are optional.

I would like to keep `NormalizedAdversaryAdapter` for comparison with other adversary implementations, but I am going to update it in a separate PR to match the revised interface in `Adversary`.
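To illustrate the point, a sub-component that checks a perturbation budget can assume both elements are present (a hypothetical sketch, not the actual `Enforcer` API):

```python
import torch


def enforce(input_adv: torch.Tensor, input: torch.Tensor, target) -> None:
    # Hypothetical constraint check: the canonical (input, target) form
    # guarantees both are available without inspecting arbitrary dict keys.
    perturbation = input_adv - input
    # e.g. an L-infinity budget of 8/255 on the perturbation.
    assert perturbation.abs().max() <= 8 / 255
    # `target` would be consulted by target-dependent constraints,
    # e.g. masking regions without ground truth.
```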
## What does this PR do?

`Adversary` assumes batches look like `(input, target)`, but target models may work on very different forms of batches. This PR adds a configurable `Adversary.batch_c15n()` that converts raw batches into the canonical form `(input, target)`, so `Adversary` can extract `input` and `target`. `Adversary.batch_c15n.revert()` converts batches back into their original form before feeding them to the target model.

`batch_c15n` not only converts list/tuple/dict input into the canonical tuple and vice versa, but also supports transform/untransform on the input, the target, or the whole batch. This flexibility allows us to reuse the same `Adversary` for attacking very different models in external projects. For example, we can denormalize `input` when attacking Anomalib models.
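For illustration, here is a minimal sketch of what a dict-batch canonicalizer could look like (the class and parameter names `DictBatchC15n`, `input_key`, `target_key`, and the exact `revert` signature are assumptions for this example, not the actual MART interface):

```python
from typing import Any, Callable, Optional, Tuple

import torch


class DictBatchC15n:
    """Convert a dict batch into the canonical (input, target) tuple and back.

    Optional transforms run on the way in and are undone on the way out,
    e.g. denormalizing the input before attacking an Anomalib model.
    """

    def __init__(
        self,
        input_key: str = "image",
        target_key: str = "label",
        input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
        input_untransform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
    ):
        self.input_key = input_key
        self.target_key = target_key
        self.input_transform = input_transform or (lambda x: x)
        self.input_untransform = input_untransform or (lambda x: x)

    def __call__(self, batch: dict) -> Tuple[torch.Tensor, Any]:
        # Canonicalize: extract (input, target) so Adversary can perturb input.
        input = self.input_transform(batch[self.input_key])
        return input, batch[self.target_key]

    def revert(self, input: torch.Tensor, target: Any, batch: dict) -> dict:
        # Convert back to the original dict form, preserving any extra keys,
        # before the batch is fed to the target model.
        reverted = dict(batch)
        reverted[self.input_key] = self.input_untransform(input)
        reverted[self.target_key] = target
        return reverted
```

A list or tuple canonicalizer would follow the same `__call__`/`revert` contract, so `Adversary` stays agnostic to the raw batch form.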
## Type of change

Please check all relevant options.
## Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- `pytest`
- `CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16` reports 70% (21 sec/epoch).
- `CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2` reports 70% (14 sec/epoch).

## Before submitting
- Ran the `pre-commit run -a` command without errors

## Did you have fun?

Make sure you had fun coding 🙃