IntelLabs / MART

Modular Adversarial Robustness Toolkit
BSD 3-Clause "New" or "Revised" License

Configuring optimization hyper-params in Perturber #91

Closed mzweilin closed 1 year ago

mzweilin commented 1 year ago

What does this PR do?

This PR enables configuring perturbation-wise hyper-parameters in attack optimization, such as learning rate, momentum, etc.

The main changes include:

Type of change

Please check all relevant options.

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

Before submitting

Did you have fun?

Make sure you had fun coding 🙃

dxoigmn commented 1 year ago

Is this not already supported via per-parameter options? See: https://pytorch.org/docs/stable/optim.html#per-parameter-options

mzweilin commented 1 year ago

> Is this not already supported via per-parameter options? See: https://pytorch.org/docs/stable/optim.html#per-parameter-options

Yes, it's supported. This PR uses that feature by constructing the trainable perturbation parameters together with their per-parameter optimization hyper-parameters.
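For readers unfamiliar with the linked docs, the following is a minimal sketch of PyTorch's per-parameter options being discussed here. The perturbation tensors and hyper-parameter values are illustrative placeholders, not MART's actual configuration:

```python
import torch

# Two hypothetical perturbation tensors, each optimized with its own
# hyper-parameters (e.g., one per input in a batch-wise attack).
delta1 = torch.zeros(3, 32, 32, requires_grad=True)
delta2 = torch.zeros(3, 32, 32, requires_grad=True)

# PyTorch per-parameter options: each dict defines a param group that
# can override the optimizer-wide defaults given as keyword arguments.
optimizer = torch.optim.SGD(
    [
        {"params": [delta1], "lr": 0.05, "momentum": 0.9},
        {"params": [delta2]},  # inherits the defaults below
    ],
    lr=0.01,
    momentum=0.0,
)
```

Here `delta1` is updated with `lr=0.05` and `momentum=0.9`, while `delta2` falls back to the optimizer-wide defaults, which is the mechanism this PR builds on to carry perturbation-wise settings into the attack optimizer.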