Closed Songxinlei closed 2 years ago
"Slice::Big Grid Grad = ***" indicates that the back-propagated gradient is too large. It is normal to see this at the beginning of the training process, and it gradually disappears. Did you find this problem when training with the BMFR Dataset or the 64spp Tungsten Dataset? Did you see improvements in PSNR during training?
> Did you find this problem in training with the BMFR Dataset or the 64spp Tungsten Dataset?

I used our own medical data. Because it contains a lot of black background, that may be what causes the back-propagated gradient values to be too large.

> Did you see improvements in PSNR in the training process?

The PSNRs are almost flat, which is what worries me. Even after I cropped out the black background, this still happened.
When calculating PSNR, the black background makes the computed error come out as 0, so I made a change: if the error falls below a small threshold, I set err = 100. However, `Big Grid Grad = ***` still appears, the training metrics stay almost flat, and the final results are poor. I would appreciate any suggestions or help.
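For reference, the near-zero-error problem described above can be guarded against when computing PSNR. This is a minimal sketch (not the project's actual code); the `eps` and `cap` values are assumptions chosen for illustration:

```python
import numpy as np

def psnr(pred, target, max_val=1.0, eps=1e-10, cap=100.0):
    """PSNR with a guard for near-zero MSE (e.g. all-black regions)."""
    # Mean squared error; on mostly-black medical images this can be ~0,
    # which would otherwise drive PSNR to infinity.
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    if mse < eps:
        return cap  # clamp instead of returning inf
    return 10.0 * np.log10(max_val ** 2 / mse)
```

With this guard, identical images return the cap value (here 100 dB) rather than infinity, so averaged PSNR curves stay finite.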
The problem concerns a new dataset and a new denoising task, which is not closely related to this project. I'll close this issue and contact you by email.
```
Start Training:
Slice::Big Grid Grad = 217.5839
Slice::Big Grid Grad = 577.5850
Slice::Big Grid Grad = 131.2526
Slice::Big Grid Grad = 108.0106
Slice::Big Grid Grad = 136.1387
Epoch 0 Train psnr 30.87924680, loss = 0.08755242
Epoch 0 Valid psnr 32.17670458, loss = 0.08540827
best model saved
```
Looking at the code, the warning seems to be triggered when `abs(sum_color) > 100`. What could cause this? In the end, the test results were also poor.
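As an illustration of the kind of check described above, the sketch below reproduces a threshold-based warning and adds gradient rescaling as one possible mitigation. This is a hypothetical Python analogue, not the project's actual C++/CUDA code; the function name and the rescaling step are assumptions:

```python
import numpy as np

# Mirrors the abs(sum_color) > 100 threshold mentioned above (assumed value).
BIG_GRAD_THRESHOLD = 100.0

def check_and_clip_grad(grad, threshold=BIG_GRAD_THRESHOLD):
    """Warn when the summed gradient magnitude exceeds a threshold,
    then rescale it so training stays stable (hypothetical mitigation)."""
    sum_color = np.abs(grad).sum()
    if sum_color > threshold:
        print(f"Slice::Big Grid Grad = {sum_color:.4f}")
        grad = grad * (threshold / sum_color)  # rescale down to the threshold
    return grad
```

If large gradients from the black-background regions are the culprit, this kind of norm-based clipping (analogous to `torch.nn.utils.clip_grad_norm_` in PyTorch) would bound their effect while leaving small gradients untouched.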