mil-tokyo / MCD_DA

MIT License

Loss is becoming negative #12

Open jindongwang opened 5 years ago

jindongwang commented 5 years ago

Very nice work! During my training, I found that the loss can become negative:

Train Epoch: 198 [0/100 (0%)]   Loss1: 0.024868  Loss2: 0.022132      Discrepancy: 0.018226

Test set: Average loss: -0.0588, Accuracy C1: 9449/10000 (94%) Accuracy C2: 9509/10000 (95%) Accuracy Ensemble: 9554/10000 (96%) 

recording record/usps_mnist_k_4_alluse_no_onestep_False_1_test.txt
Train Epoch: 199 [0/100 (0%)]   Loss1: 0.012343  Loss2: 0.020431      Discrepancy: 0.030520

Test set: Average loss: -0.0581, Accuracy C1: 9419/10000 (94%) Accuracy C2: 9518/10000 (95%) Accuracy Ensemble: 9537/10000 (95%) 

recording record/usps_mnist_k_4_alluse_no_onestep_False_1_test.txt

Do you think this is normal?

postBG commented 5 years ago

I guess so.

JiaoJinyang commented 5 years ago

Because the author used `nll_loss`: `nll_loss` expects log-probabilities and simply negates the value at the target index, so if it is fed plain probabilities in [0, 1] instead, the result is negative.
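A minimal sketch of that mechanism (assuming the evaluation loop passes softmax probabilities rather than log-probabilities to `F.nll_loss` — check the repo's test code to confirm which one is used):

```python
import torch
import torch.nn.functional as F

# F.nll_loss(input, target) returns the mean of -input[i, target[i]].
# It expects log-probabilities; if given softmax probabilities (values in
# [0, 1]), the picked entries are positive and the "loss" comes out negative.
probs = torch.tensor([[0.7, 0.2, 0.1]])      # softmax output, not log-softmax
target = torch.tensor([0])

wrong = F.nll_loss(probs, target)            # -0.7: negative "loss"
right = F.nll_loss(torch.log(probs), target) # -log(0.7) ~ 0.357: proper NLL

print(wrong.item(), right.item())
```

So a negative average test loss is just a reporting artifact of how the loss is computed; the accuracy numbers are unaffected.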

Dr-Zhou commented 5 years ago


Have you solved this problem? And if my PyTorch version is 0.4.1, can I get the same performance?