Closed · JGIroro closed this issue 1 year ago
You can try the code at model.py line 102, which clamps sigma to the range (1e-10, 1e10); I left it commented out. The NaN is likely caused by sigma = 0.
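A minimal plain-Python sketch of why that clamp helps, assuming the loss takes log(sigma) (or divides by sigma) somewhere downstream; the actual repo code presumably does this on tensors, e.g. with `torch.clamp(sigma, 1e-10, 1e10)`:

```python
import math

def clamp_sigma(sigma, lo=1e-10, hi=1e10):
    """Clamp sigma into [lo, hi] so log(sigma) and 1/sigma stay finite.

    Mirrors the commented-out clamp suggested around model.py line 102;
    the bounds lo/hi are the (1e-10, 1e10) values from the comment above.
    """
    return max(lo, min(hi, sigma))

# With sigma == 0, log(sigma) diverges and the loss turns into NaN/inf;
# after clamping, the worst case is log(1e-10), which is finite.
print(math.log(clamp_sigma(0.0)))   # finite instead of -inf
print(clamp_sigma(1e12))            # capped at the upper bound 1e10
```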
I followed the comment above and used the same dataset and files as yours. Although the NaN is gone, the loss still keeps growing, as do the MSE and bpp. The only remaining difference may be the environment: I train on PyTorch 1.7 and Python 3.6 with CUDA 11.0. Is it necessary to use the same environment as yours?
After switching to the same environment as yours, training looks good. Thanks a lot!
I want to train a new model on my own dataset, but the loss keeps growing while the PSNR becomes negative. Eventually, the loss becomes NaN.