MadryLab / robustness

A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
MIT License

Allowing a more flexible forward pass #88

Open Aaron-Zhao123 opened 3 years ago

Aaron-Zhao123 commented 3 years ago

The current AttackerModel class only supports passing pre-defined parameters (like make_adv, ...) and attacker_kwargs.

For slightly more advanced users, I think it is common to have models with additional arguments (parameters other than just inputs and outputs), such as:

class MyModel(torch.nn.Module):
    ...
    def forward(self, inputs, targets, random_x, random_y, ...):
        ...

The current AttackerModel class does not really give the flexibility to handle inputs like random_x and random_y.

Would it be worth giving users the flexibility to call their wrapped model with additional input parameters? I think simply adding something like

    def forward(self,
                inp,
                target=None,
                make_adv=False,
                with_latent=False,
                fake_relu=False,
                no_relu=False,
                with_image=True,
                forward_args=None,
                **attacker_kwargs):
        ...
        forward_args = forward_args or {}  # guard against None before unpacking
        output = self.model(normalized_inp,
                            with_latent=with_latent,
                            fake_relu=fake_relu,
                            no_relu=no_relu,
                            **forward_args)

in AttackerModel would do it?
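To illustrate, here is a minimal, self-contained sketch of the pass-through idea (this is not the actual robustness API; `Wrapper`, `MyModel`, `random_x`, and `random_y` are hypothetical names used only for the example). The wrapper's `forward` accepts a `forward_args` dict and unpacks it into the wrapped model's call, defaulting to an empty dict so callers that don't need extra arguments are unaffected:

```python
import torch

class MyModel(torch.nn.Module):
    """Toy model whose forward() takes extra arguments beyond the input."""
    def forward(self, inp, random_x=0.0, random_y=0.0):
        # Trivial computation so the pass-through effect is visible.
        return inp + random_x + random_y

class Wrapper(torch.nn.Module):
    """Hypothetical stand-in for AttackerModel's forwarding behavior."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, inp, forward_args=None):
        # Default to an empty dict so **forward_args never fails on None.
        forward_args = forward_args or {}
        return self.model(inp, **forward_args)

wrapped = Wrapper(MyModel())
out = wrapped(torch.zeros(2), forward_args={"random_x": 1.0, "random_y": 2.0})
print(out)  # tensor([3., 3.])
```

Calls without `forward_args` still work, so the change would be backward compatible with existing users of the wrapper.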

Thanks for open-sourcing this great package, by the way.

andrewilyas commented 3 years ago

Hi, thank you for the suggestion, it sounds like a good idea! I'm not sure when we'll get around to it but feel free to submit a PR.