Closed sacmehta closed 5 years ago
Hi @sacmehta, are you using your own dataset? The log says the regression loss is Inf. The regression loss here is Smooth L1 loss, so it may help to inspect some input data and intermediate values. Please also see the PyTorch implementation: https://pytorch.org/docs/stable/nn.html#torch.nn.SmoothL1Loss .
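For intuition, Smooth L1 can be sketched in a few lines of plain Python (this is the standard formula, not this repo's actual code): if any regression target is non-finite, the loss is non-finite too, so checking the encoded targets first usually isolates the problem.

```python
import math

def smooth_l1(pred, target, beta=1.0):
    # Elementwise Smooth L1: 0.5*d^2/beta if |d| < beta, else |d| - 0.5*beta
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

# Finite inputs give a finite loss (quadratic branch here):
print(smooth_l1(0.3, 0.1))             # 0.02

# A single non-finite regression target poisons the whole loss:
print(smooth_l1(0.3, float('-inf')))   # inf
```

So an Inf average regression loss almost always means the encoded box targets (not the network outputs) already contain Inf or NaN.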
I was able to fix the above issue, but now I am hitting another one: the loss is not decreasing. The total loss is stuck at around 12. Any idea what might be going wrong?
P.S.: I am using PyTorch weights and not Caffe weights.
Hi @sacmehta, have you tried a smaller learning rate? Also, try training on a small subset of the data to verify the pipeline is correct. If it is, you should get an overfitted model with close to 0 loss.
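The suggestion above can be illustrated with a toy sketch (pure Python, not the SSD pipeline): a correct training loop run on a tiny fixed dataset should be able to drive the loss to near zero; if it plateaus even there, the bug is in the pipeline, not the data.

```python
def train(xs, ys, lr=0.1, steps=200):
    # Single-parameter "model" y = w * x, trained with plain gradient descent on MSE.
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return w, loss

# The tiny dataset is exactly y = 2x, so a correct loop overfits it completely.
w, loss = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(w, loss)  # w converges to ~2.0, loss to ~0
```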
@sacmehta Hi, are you able to share your pretrained PyTorch ImageNet weights? I can try to reproduce it, since I am working on a similar project.
@alanstark If you use PyTorch's vision API (torchvision), you should be able to download them by setting the pretrained argument to True.
Oh, I see. I was thinking something different.
@qfgaohao Do you have any training logs for MobileNet?
Hi @sacmehta, how did you fix the inf problem? I am lost in it.
Make sure the feature map sizes used for prior generation are the same as the feature maps produced by the CNN backbone used for SSD.
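A quick sanity check is to recompute the expected feature map sizes from the input resolution and the strides of each SSD head, and compare them against the sizes hard-coded in the prior/anchor config. A minimal sketch, assuming the standard SSD300 setup (the strides below are assumptions, not this repo's exact config):

```python
import math

def feature_map_size(input_size, stride):
    # Output spatial size of a stage with "same"-style padding.
    return math.ceil(input_size / stride)

input_size = 300                       # SSD300 input resolution
strides = [8, 16, 32, 64, 100, 300]    # assumed effective stride per SSD head
sizes = [feature_map_size(input_size, s) for s in strides]
print(sizes)  # [38, 19, 10, 5, 3, 1] -- must match the prior-generation config
```

If these don't match the sizes in the prior config, priors and predictions are matched against the wrong cells, and both losses behave strangely.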
@sacmehta, thanks a lot. I have solved the problem: my training data has very small boxes, so the smooth L1 loss became -Inf (log(0) = -inf in the box encoding).
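To make the failure mode above concrete: SSD-style box encoding takes the log of the ground-truth/prior size ratio, so a degenerate (zero-width or zero-height) box produces log(0) = -inf as a regression target. A minimal sketch; the clamp epsilon and the mitigation shown are assumptions (filtering tiny boxes from the dataset is the other common fix):

```python
import math

EPS = 1e-6  # assumed clamp value, not taken from the repo

def encode_size(gt, prior):
    # SSD-style size target: log of the ground-truth / prior size ratio.
    # A zero-size ground-truth box makes the ratio 0, and log(0) = -inf.
    ratio = gt / prior
    return math.log(max(ratio, EPS))  # clamping keeps the target finite

print(encode_size(0.0, 0.5))   # finite (clamped) instead of -inf
print(encode_size(0.25, 0.5))  # ordinary box: log(0.5)
```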
@sacmehta I have the same issue as you: not only the validation loss, but sometimes also the training loss shows inf for Average Loss and Average Regression Loss, while the classification loss keeps declining. How did you solve it? Thank you.
Well, I rewrote most of the SSD code from here:
https://github.com/amdegroot/ssd.pytorch
You can find my implementation here and see if it helps:
Thanks for your reply. Could you tell me what caused the inf loss? I have modified the network structure, so I want to be able to locate the error in my code.
I don’t remember; it’s been a while. You can try plugging your model into my codebase and see if that helps.
Hi,
I am trying to reproduce your results, but the validation regression loss is infinite. I have tried different learning rate schedules, but didn't have any luck. It would be great if you could provide some insight into this issue. Thanks.
Here is the output:
2018-12-01 12:38:16,778 - root - INFO - Epoch: 0, Step: 100, Average Loss: 12.1986, Average Regression Loss 2.7535, Average Classification Loss: 9.4451
2018-12-01 12:38:34,135 - root - INFO - Epoch: 0, Step: 200, Average Loss: 7.7354, Average Regression Loss 2.4653, Average Classification Loss: 5.2701
2018-12-01 12:38:51,741 - root - INFO - Epoch: 0, Step: 300, Average Loss: 7.1205, Average Regression Loss 2.2209, Average Classification Loss: 4.8996
2018-12-01 12:39:10,253 - root - INFO - Epoch: 0, Step: 400, Average Loss: 6.8956, Average Regression Loss 2.1017, Average Classification Loss: 4.7939
2018-12-01 12:39:27,837 - root - INFO - Epoch: 0, Step: 500, Average Loss: 6.6482, Average Regression Loss 1.9754, Average Classification Loss: 4.6728
2018-12-01 12:39:45,364 - root - INFO - Epoch: 0, Step: 600, Average Loss: 6.5128, Average Regression Loss 1.8923, Average Classification Loss: 4.6204
2018-12-01 12:40:18,564 - root - INFO - Epoch: 0, Validation Loss: inf, Validation Regression Loss inf, Validation Classification Loss: 10.0192