Closed zbb1111 closed 7 months ago
Thanks for your interest in our work.
Yes, the training loss of the normal DeepLab run is NaN. What could be the reason for this, and how should I solve it?
In addition, could I take a look at the result images of the generated visualized pseudo-labels (sema_seg)?
I'm sorry to trouble you with so many questions. I hope you can reply when you have time. Sincere thanks!!
I also found that DeepLab stops at 99 iterations during training. Is this normal? I have run it several times and it is always like this. Could this be the cause of the NaN loss?
The NaN problem always occurs in the first few iterations. You can watch the loss over the first few iterations to judge whether training is running normally.
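One way to act on this advice is to guard the training loop with an early NaN/Inf check, so a diverging run fails immediately instead of wasting the full schedule. This is a minimal sketch, not code from this repository; the function name and the warm-up window are hypothetical:

```python
import math


def check_loss(loss_value: float, iteration: int) -> float:
    """Fail fast if the loss is NaN or Inf, reporting the iteration.

    Intended to wrap the scalar loss in the first few training
    iterations, where divergence typically shows up first.
    """
    if math.isnan(loss_value) or math.isinf(loss_value):
        raise ValueError(
            f"loss became {loss_value} at iteration {iteration}; "
            "check learning rate, input data, and label range"
        )
    return loss_value


# Hypothetical usage inside a training loop:
#   loss = criterion(model(images), labels)
#   check_loss(loss.item(), iteration)
#   loss.backward()
```

If the check fires within the first few iterations, common suspects are an oversized learning rate, corrupted inputs, or out-of-range label indices in the loss.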
I have solved this problem, thank you very much!!! Wishing you all the best!
Hello, thank you very much for your excellent article. I have some points of confusion that I hope you can help clarify: