mzweilin closed this pull request 1 year ago
Is this not already supported via per-parameter options? See: https://pytorch.org/docs/stable/optim.html#per-parameter-options
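For concreteness, per-parameter options let a single optimizer apply different hyper-parameters to different parameter groups. A minimal standalone sketch (the model and values here are illustrative, not from this PR):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 10),  # group 0
    torch.nn.Linear(10, 2),   # group 1
)

# Each dict is a parameter group with its own hyper-parameters;
# keys omitted from a group fall back to the defaults given as kwargs.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters()},  # uses default lr and momentum
        {"params": model[1].parameters(), "lr": 1e-3, "momentum": 0.5},
    ],
    lr=1e-2,
    momentum=0.9,
)
```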
Yes, it's supported, and this PR uses that feature by constructing trainable parameters together with their optimization information.
What does this PR do?
This PR enables configuring per-perturbation hyper-parameters in attack optimization, such as learning rate and momentum.
The main changes include:

- `**optim_params` in `Perturber`.
- `Adversary` can fetch parameters and their optimization hyper-parameters by calling `Perturber.parameter_groups()` (sketched below).
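A minimal sketch of how these pieces might fit together. Only the names `Perturber`, `**optim_params`, and `parameter_groups()` come from this PR; the internals below are assumptions for illustration, not the actual implementation:

```python
import torch

class Perturber(torch.nn.Module):
    # Hypothetical sketch: the real Perturber in this repo differs.
    # **optim_params captures per-perturbation hyper-parameters,
    # e.g. Perturber((3, 32, 32), lr=0.1, momentum=0.9).
    def __init__(self, shape, **optim_params):
        super().__init__()
        self.perturbation = torch.nn.Parameter(torch.zeros(shape))
        self.optim_params = optim_params

    def parameter_groups(self):
        # Return a per-parameter-options group understood by torch.optim.
        return [{"params": [self.perturbation], **self.optim_params}]

# An Adversary-like caller could then aggregate groups from several
# perturbers and hand them to a single optimizer.
perturbers = [
    Perturber((3, 32, 32), lr=0.1),
    Perturber((3, 32, 32), lr=0.01, momentum=0.9),
]
groups = [g for p in perturbers for g in p.parameter_groups()]
optimizer = torch.optim.SGD(groups, lr=1.0)  # per-group lr overrides the default
```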
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Before submitting
Did you run the `pre-commit run -a` command without errors?

Did you have fun?
Make sure you had fun coding 🙃