I understand that after training the network, compute_bn_statistics.py must be run to recalculate the batch-norm weights, and only then can I use inference.prototxt to test the network. I would like to plot the test loss the same way the training loss is plotted during training, so I added the following to my train.prototxt:
include { phase: TEST }
for my second data layer (the one whose txt file points to the test images), and let Caffe run this test phase every n iterations. However, I would think that during this TEST phase no proper batch normalization is applied, since compute_bn_statistics.py is only run independently at the end, right? That would mean the TEST accuracy and loss reported during training are not correct, right?
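For reference, here is a rough sketch of how the test-phase data layer and the matching solver settings might look. The layer type follows SegNet's DenseImageData convention; all names, paths, and batch sizes below are placeholder assumptions, not values from the original setup:

```protobuf
# train.prototxt -- second data layer, active only in the TEST phase
layer {
  name: "data"
  type: "DenseImageData"
  top: "data"
  top: "label"
  dense_image_data_param {
    source: "/path/to/test.txt"   # hypothetical txt file listing test images
    batch_size: 1
  }
  include { phase: TEST }
}
```

```protobuf
# solver.prototxt -- how often Caffe runs the TEST phase
test_iter: 50        # number of test batches averaged per evaluation
test_interval: 100   # run the TEST phase every 100 training iterations
```

With `include { phase: TEST }` on the layer and `test_interval` in the solver, Caffe reports test loss/accuracy alongside the training loss.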
Is there any solution or idea to tackle this problem?
What is the advantage of computing the BN statistics for the test phase offline (via compute_bn_statistics.py)?
I understand Caffe's BN layer does the same, but online. Does anyone have an example of how to write the train.prototxt using Caffe's BN layer instead of SegNet's?
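As a sketch (not verified against the SegNet architecture), one normalized convolution output using Caffe's built-in BatchNorm layer, paired with the Scale layer that supplies the learned gamma/beta, would look roughly like this; the blob and layer names are hypothetical:

```protobuf
layer {
  name: "conv1_bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  # Caffe sets use_global_stats automatically: false in the TRAIN phase
  # (normalize with mini-batch statistics) and true in the TEST phase
  # (normalize with the running averages accumulated during training).
}
layer {
  name: "conv1_scale"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }  # learn gamma (scale) and beta (shift)
}
```

Because this layer maintains its running mean/variance online during training, the TEST phase automatically uses proper global statistics, and no separate compute_bn_statistics.py step would be needed.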
Thank you very much for your time, CG