IntelLabs / MART

Modular Adversarial Robustness Toolkit
BSD 3-Clause "New" or "Revised" License

Automatically check training and eval mode differences #165

Open dxoigmn opened 1 year ago

dxoigmn commented 1 year ago

Model authors sometimes use nn.Module.training to change the control flow of their model. This is problematic because we often assume that a model in training mode produces more or less the same result as in eval mode. We should detect when this is not the case and warn the user so they can take appropriate action!
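A minimal sketch of such a check (the helper name and tolerance are my own, not MART API): run the model on the same input in both modes and warn when the outputs diverge. Note that dropout and batch norm legitimately differ between modes, so this can only be a heuristic.

```python
import warnings

import torch
import torch.nn as nn


def check_train_eval_consistency(model: nn.Module, x: torch.Tensor, atol: float = 1e-5) -> bool:
    """Hypothetical helper: warn if train() and eval() outputs differ on the same input."""
    was_training = model.training
    try:
        with torch.no_grad():
            model.train()
            out_train = model(x)
            model.eval()
            out_eval = model(x)
    finally:
        model.train(was_training)  # restore the caller's mode
    consistent = torch.allclose(out_train, out_eval, atol=atol)
    if not consistent:
        warnings.warn(
            "Model output differs between train() and eval() modes; "
            "control flow may depend on nn.Module.training."
        )
    return consistent


# A toy model whose forward branches on self.training, as described above:
class ModeDependent(nn.Module):
    def forward(self, x):
        return x + 1 if self.training else x
```

A plain `nn.Linear` passes the check, while `ModeDependent` triggers the warning.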

dxoigmn commented 1 year ago

I will add that it is also possible to change control flow based on other things, like whether ground truth is present or not. Detecting that kind of behavior should probably be in scope too. This is done, for example, in this implementation of YOLOv4: https://github.com/AlexeyAB/Yet-Another-YOLOv4-Pytorch/blob/d80d6a20372598b6306b37218cb61533e8bd9592/model.py#L893

Thankfully that code doesn't actually change anything about the output, just whether it computes a loss.
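To illustrate the benign case: a sketch (class and function names are hypothetical, not from the linked repo) of a forward that branches on the presence of targets, plus a probe that calls the model both ways and checks whether the predictions themselves change.

```python
import torch
import torch.nn as nn


class YoloLike(nn.Module):
    """Toy model that, like the linked YOLOv4 code, only adds a loss when targets exist."""

    def forward(self, x, y=None):
        preds = x * 2
        if y is not None:
            # Extra branch computes a loss; the predictions are unchanged.
            loss = ((preds - y) ** 2).mean()
            return preds, loss
        return preds


def predictions_depend_on_targets(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> bool:
    """Hypothetical probe: True if passing ground truth changes the predictions."""
    with torch.no_grad():
        out_with = model(x, y)
        out_without = model(x)
    preds_with = out_with[0] if isinstance(out_with, tuple) else out_with
    preds_without = out_without[0] if isinstance(out_without, tuple) else out_without
    return not torch.allclose(preds_with, preds_without)
```

For `YoloLike` the probe returns False, which is exactly the "thankfully harmless" situation described above; a model whose predictions did shift would be flagged.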