Closed phonchi closed 5 years ago
The training loss not decreasing (much) past the first few epochs is normal and expected behaviour when training with the noise2noise framework. We are training the model to perform an impossible task (reconstruct one noisy image from another), so don't expect this loss to get very small. If it does, the model is probably overfitting.
I suspect the drop in performance is from the first-half/last-half frame split. My suggestion is to split into even/odd frames instead. Also, are you summing the frames or treating each frame as an individual image (in which case each micrograph would generate n/2 pairs, where n is the number of frames)? If the latter, you may want to sum them to generate one pair per movie. We've found that training on the even/odd split with summing gives good results.
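To make the suggestion concrete, here is a minimal sketch of building one noise2noise training pair per movie via the even/odd split with summing. It assumes a movie is already loaded as a NumPy array of shape `(n_frames, H, W)`; reading the actual movie files (e.g. MRC stacks) is omitted, and `even_odd_pair` is a hypothetical helper name, not part of topaz.

```python
import numpy as np

def even_odd_pair(movie):
    """Sum even-indexed and odd-indexed frames into one noisy image pair.

    movie: array of shape (n_frames, H, W).
    Returns two (H, W) images, one from each half of the frames.
    """
    even = movie[0::2].sum(axis=0)  # frames 0, 2, 4, ...
    odd = movie[1::2].sum(axis=0)   # frames 1, 3, 5, ...
    return even, odd

# Toy stand-in for a micrograph movie: 16 frames of 4x4 pixels.
movie = np.random.rand(16, 4, 4)
a, b = even_odd_pair(movie)
# a would go into directory A, b into directory B, one pair per movie.
```

The two summed images see the same specimen but independent halves of the dose, which is exactly the kind of independent-noise pair the noise2noise framework expects.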
Thanks!! I will try to use the even/odd split with summing.
It turns out that the even/odd split with summing works. Thanks again for the help!
Great, glad that helped!
Dear developers:
I notice that a denoise model was added in v0.2. I would like to train from scratch using our own data. My idea is to put the first half of the frames of all movie files into directory A and the last half into directory B, then run the training script using
```
topaz denoise -a A -b B --save-prefix out
```
(I have 342 image pairs for training and 38 image pairs for validation.) However, the training loss does not decrease much over 100 epochs, and denoising with the final model is much worse than with the pretrained model.
Any suggestions would be highly appreciated. Thanks in advance!