voldemortX / pytorch-auto-drive

PytorchAutoDrive: Segmentation models (ERFNet, ENet, DeepLab, FCN...) and Lane detection models (SCNN, RESA, LSTR, LaneATT, BézierLaneNet...) based on PyTorch with fast training, visualization, benchmarking & deployment help
BSD 3-Clause "New" or "Revised" License

training loss is NaN #49

Closed daigang896 closed 2 years ago

daigang896 commented 2 years ago

Hello, when the training loss is NaN, what should I do?

voldemortX commented 2 years ago

@daigang896 Hi! What is your exact training script? As far as I know, with this repo's default lr settings, only LSTR sometimes produces NaN. SCNN/RESA only do that when the lr is too high.

daigang896 commented 2 years ago

Train script: python main_landec.py --epochs=200 --lr=0.15 --batch-size=16 --dataset=tusimple --method=scnn --backbone=vgg16 --mixed-precision --exp-name=vgg16_scnn_tusimple. However, the data used is not TuSimple; it is only in the TuSimple format. I'll set a smaller learning rate first and see how it goes.

daigang896 commented 2 years ago

@voldemortX

voldemortX commented 2 years ago

@daigang896 FYI, a segmentation method's learning rate should be adjusted according to the total number of pixels in a batch (i.e., not only the batch size, but also the training resolution); the relationship is mostly linear or sqrt. That is, unless the exploded loss is the existence loss. Other than smaller learning rates, sometimes a longer warmup can bring better performance, while in rare cases simply re-running the experiment is enough (VGG-SCNN does have a small failure rate in training). A rough sketch of the scaling rule is below.
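For example, a minimal sketch of that scaling rule (the helper and the reference batch/resolution numbers here are just placeholders, not anything from this repo):

```python
# Hypothetical helper: scale the base lr by the ratio of total pixels per
# batch between your setup and a known-good reference setup.
def scale_lr(base_lr, base_batch, base_hw, new_batch, new_hw, rule="linear"):
    base_pixels = base_batch * base_hw[0] * base_hw[1]
    new_pixels = new_batch * new_hw[0] * new_hw[1]
    ratio = new_pixels / base_pixels
    if rule == "linear":
        return base_lr * ratio
    if rule == "sqrt":
        return base_lr * ratio ** 0.5
    raise ValueError(f"unknown rule: {rule}")

# e.g. reference batch 20 at 360x640 (placeholder numbers), yours 16 at 288x800:
# lr = scale_lr(0.15, 20, (360, 640), 16, (288, 800))
```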

Note that other than the typical gradient explosion caused by large learning rates, irregular labels (labels with NaN values, for instance) can also cause this issue.
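A quick way to rule that out is to scan the labels before training. Something like this (assuming the common TuSimple JSON layout with "lanes", "h_samples", and "raw_file" keys; the filename is a placeholder):

```python
import json
import math

# Hedged sketch: scan a TuSimple-format label file for non-finite coordinates.
# Adjust the filename and keys if your converted dataset differs.
with open("label_data.json") as f:
    for line_no, line in enumerate(f, 1):
        entry = json.loads(line)
        values = list(entry.get("h_samples", []))
        for lane in entry.get("lanes", []):
            values.extend(lane)
        if any(not math.isfinite(float(v)) for v in values):
            print(f"Non-finite value at line {line_no}: {entry.get('raw_file')}")
```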

daigang896 commented 2 years ago

@voldemortX OK, I see. Thank you very much. I'll check it carefully.

daigang896 commented 2 years ago

@voldemortX

I checked the data carefully, and there was no NaN in it. However, in the middle of training, the training loss becomes NaN. The data is in TuSimple format. How should I debug this situation?

The training is carried out with: python main_landec.py --epochs=240 --lr=0.12 --batch-size=16 --dataset=tusimple --method=scnn --backbone=erfnet --mixed-precision --exp-name=erfnet_scnn_tusimple.

voldemortX commented 2 years ago

@daigang896 What is the size of your dataset? 240 epochs seems long. Theoretically, the learning rate decays in proportion to the training length, so with more epochs you'll have a higher lr in the early stages, which makes the loss easier to explode. You might consider a longer warmup via --warmup-steps; see the sketch below.
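To illustrate why (this is a generic linear-warmup plus poly-decay sketch, not necessarily the exact schedule this repo uses):

```python
def lr_at_step(base_lr, step, total_steps, warmup_steps, power=0.9):
    # Linear warmup from 0 to base_lr, then polynomial ("poly") decay.
    # Illustrative only; the repo's actual scheduler may differ in details.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * (1 - progress) ** power

# With a larger total_steps (more epochs), the decay is stretched out, so the
# lr stays near base_lr longer early in training, hence easier to explode.
```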

Or just try lr=0.01 and see if it still produces NaN. For a sanity check, remove --mixed-precision.
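To catch the explosion at the iteration where it happens, a generic PyTorch check (not specific to this repo) is to assert the loss is finite each step:

```python
import torch

# Generic sanity check: stop as soon as the loss turns NaN/Inf so you can
# inspect the offending batch instead of discovering it epochs later.
def check_loss(loss, batch_idx):
    if not torch.isfinite(loss):
        raise RuntimeError(f"Loss became {loss.item()} at batch {batch_idx}")

# Inside the training loop:
#   loss = criterion(outputs, targets)
#   check_loss(loss, batch_idx)
#   loss.backward()
```

torch.autograd.set_detect_anomaly(True) can also help localize which op first produced the NaN, at the cost of slower training.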

daigang896 commented 2 years ago

@voldemortX OK, thank you. I'll try it according to your suggestion.

voldemortX commented 2 years ago

@daigang896 It seems the problem is somewhat resolved, and this issue happened a long time ago. We never encountered a similar issue while refactoring the whole codebase, so it is probably not a bug. If you still can't make it work with the new master branch, feel free to reopen!