qianxiao111 opened this issue 1 year ago
I found that in the testing code (`lib/model.py`, line 175), the network is not switched to evaluation mode (the calls to `self.netg.eval()` and `self.netd.eval()` are missing). As a result, the model keeps updating the running mean and variance of its `BatchNorm2d` layers during testing. I think that is why the test results change each time the test is run.
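To illustrate the claim, here is a minimal standalone PyTorch sketch (not the repository's code) showing that a `BatchNorm2d` layer left in train mode updates its running statistics on every forward pass, while `eval()` freezes them:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)                 # running_mean starts at zeros
x = torch.randn(8, 3, 32, 32)

bn.train()                             # train mode (the default)
bn(x)
print(bn.running_mean)                 # changed by the forward pass

before = bn.running_mean.clone()
bn.eval()                              # eval mode freezes the statistics
bn(x)
print(torch.equal(before, bn.running_mean))  # True: no update in eval mode
```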
Additionally, I found that the missing `net.eval()` call inflates performance, because the model effectively adapts its normalization statistics to the test set. So I think this bug can lead to unconvincing results.
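A minimal sketch of the kind of fix the report suggests (the `netg`/`netd` names are taken from the issue; the test-loop structure and the `score_fn` helper are hypothetical, not the repository's actual code):

```python
import torch

def run_test(netg, netd, test_loader, score_fn):
    """Evaluate with both networks in eval mode so BatchNorm running
    statistics stay frozen. `score_fn` is a hypothetical callable that maps
    (netg, netd, images) to per-image anomaly scores."""
    netg.eval()
    netd.eval()
    scores = []
    with torch.no_grad():              # gradients are not needed for testing
        for images, _ in test_loader:
            scores.append(score_fn(netg, netd, images))
    # Restore train mode so a subsequent training epoch behaves as before.
    netg.train()
    netd.train()
    return torch.cat(scores)
```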
See discussion under issue https://github.com/samet-akcay/ganomaly/issues/83
Why does testing the same abnormal image with a trained model give different results (AUC) each time? How can I solve this problem? Also, when a normal sample is put into the abnormal folder for testing, the test results are also bad.