Open junwenxiong opened 1 year ago
Hello
It is normal for the loss to fluctuate up and down over short spans of iterations. If you plot the loss over the entire training run, you will find that it is decreasing overall.
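One way to see the overall trend through the iteration-to-iteration noise is a simple moving average of the logged loss values. This is just an illustrative sketch (`running_average` and the synthetic loss curve are not from the TMFI-Net code):

```python
from collections import deque

def running_average(losses, window=100):
    """Smooth a per-iteration loss curve with a moving average so the
    long-term trend is visible despite short-term fluctuation."""
    buf = deque(maxlen=window)
    smoothed = []
    for loss in losses:
        buf.append(loss)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Hypothetical noisy-but-decreasing loss curve for demonstration
raw = [1.0 / (1 + 0.01 * i) + (0.1 if i % 2 else -0.1) for i in range(500)]
trend = running_average(raw)
# trend still oscillates locally, but trend[-1] is well below trend[0]
```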
In addition, the training data used in each epoch is randomly sampled from the dataset. Because the dataset is very large, it is not recommended to train on all of it in every epoch. For details, see the dataloader code.
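The per-epoch random subsampling described above can be sketched as follows. This is a minimal illustration, not the repository's actual dataloader; `epoch_subset` and `sample_size` are hypothetical names:

```python
import random

def epoch_subset(dataset_size, sample_size, seed=None):
    """Draw a random subset of sample indices for one epoch.

    Each epoch trains on a different random slice of the (very large)
    dataset rather than iterating over every sample.
    """
    rng = random.Random(seed)
    return rng.sample(range(dataset_size), k=min(sample_size, dataset_size))

# Hypothetical usage: pick 10 of 100 clips for this epoch
indices = epoch_subset(dataset_size=100, sample_size=10, seed=0)
```

In PyTorch the same effect is commonly achieved with `torch.utils.data.RandomSampler(dataset, replacement=True, num_samples=...)`.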
good luck
I tried to train TMFI-Net from scratch, but the loss I got is abnormal.
Why did this happen?
I'm getting the same issue when training from scratch on the DHF1K training set. Have you solved this since?