Closed: szpxmu closed this issue 3 years ago.
Hi @hellogry
Try this: https://github.com/swz30/MPRNet/issues/51#issuecomment-843802843
I've tried it and it doesn't work.
Can you share the training log for few epochs? It will be fine as long as the loss is gradually decreasing (and not exploding).
The loss stayed at around 1442 the whole time; the minimum was 1441. How do I display the training log?
Hi @hellogry
To save the logs, you can use
python train.py | tee deraining_logs.txt
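One caveat in case the saved file comes out empty: `tee` only captures stdout, and some training scripts print their progress to stderr. Merging the two streams with `2>&1` before the pipe catches both. A small self-contained demo (the `echo` lines stand in for `train.py`, which is not run here):

```shell
# Demo: without 2>&1, tee would miss the stderr line.
# In practice, replace the braces with: python train.py 2>&1 | tee deraining_logs.txt
{ echo "epoch 1 loss 1442 (stdout)"; echo "progress bar (stderr)" >&2; } 2>&1 | tee demo_log.txt
cat demo_log.txt   # both lines are in the file
```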
Thanks
Isn't this normal? I'm not sure, but I think it's because the reported loss is an epoch loss, i.e. the sum of the losses over every iteration. Divided by the number of training images, the per-iteration loss is about 0.8, which seems acceptable and correct to me.
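The arithmetic behind that explanation is easy to check. A minimal sketch (this is not the repo's actual training loop, just the normalization the comment describes, using the numbers from this thread):

```python
# If the logged value is a summed epoch loss, dividing by the number of
# training samples recovers the much smaller per-sample figure.
epoch_loss = 1442.0        # summed loss reported in this thread
num_train_images = 1800    # training-set size reported in this thread

per_sample_loss = epoch_loss / num_train_images
print(round(per_sample_loss, 2))  # → 0.8
```

So a flat value near 1442 is consistent with a per-sample loss of roughly 0.8, not with a diverging model.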
The dataset has 1800 training images and 200 test images. Why is the loss so high? I tried changing the learning rate to 1.5e-4, but it didn't help.