Fix bug in batch renorm: running means were not registered as parameters or buffers, so they did not appear in the model's state_dict. As a result they were not saved with the model, causing unwanted behavior when pretrained models were loaded. The running stats are now registered as buffers (no gradient accumulation, not in the parameter list, but present in the state_dict of the net).
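A minimal sketch of the fix, assuming a hand-rolled renorm layer (the class name `BatchRenorm1d` and its fields are illustrative, not the actual code in this repo): `register_buffer` puts the running stats in the state_dict without making them trainable.

```python
import torch
import torch.nn as nn


class BatchRenorm1d(nn.Module):
    """Illustrative layer: running stats registered as buffers so they
    are saved/loaded with state_dict but excluded from parameters()."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        # Learnable affine parameters (do get gradients).
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # Buffers: in state_dict, no gradients, not in parameters().
        # Before the fix these were plain tensor attributes and were
        # silently dropped on save/load.
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        if self.training:
            mean = x.mean(dim=0)
            var = x.var(dim=0, unbiased=False)
            with torch.no_grad():
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.weight * x_hat + self.bias
```

With this, `model.state_dict()` contains `running_mean` and `running_var`, while `model.parameters()` (and thus the optimizer) never sees them.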
Fix bug in the utils.py functions that deal with batch norm and renorm: the method used to scan the model for those layers did not work for every architecture. These functions are now simpler, more legible, and work with all possible model architectures.
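A hedged sketch of the architecture-agnostic scan (the helper name `get_batchnorm_layers` is hypothetical, not the actual utils.py function): `model.modules()` walks the module tree recursively, so it finds layers at any nesting depth, unlike a scan over `model.children()`, which only sees the top level.

```python
import torch.nn as nn


def get_batchnorm_layers(model):
    """Return every batch norm layer in the model, at any nesting depth.

    model.modules() yields the model itself and all submodules
    recursively, so this works for Sequential stacks, nested blocks,
    and custom containers alike.
    """
    norm_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    return [m for m in model.modules() if isinstance(m, norm_types)]
```

Example: for a model with a norm layer both at the top level and inside a nested `nn.Sequential`, the helper finds both, which a shallow `children()` scan would not.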
Some fixes for PEP 8 style compliance.
NB: the file ar_free_lat_replay.py is not changed, even though the diff tool reports every line as changed. This is because you used CRLF line endings (the default on Windows) while I used LF (the default on Linux). I was able to configure git so that newline differences are not tracked as actual differences, but I don't know why I cannot force that on ar_free_lat_replay.py. My bad; accept it as is, I checked and it is not different from your file.