yuantianyuan01 / StreamMapNet

During the training phase, the gradient gradually becomes larger #15

Open TfeiSong opened 10 months ago

TfeiSong commented 10 months ago

During the training phase, the gradient gradually becomes larger. Do you know what causes this? Looking forward to your reply.

[Screenshot: training log showing the gradient norm growing over iterations]

yuantianyuan01 commented 10 months ago

What batch size do you use? It seems the training failed to converge at all, and I suspect this is caused by a very small batch size. Can you share your training config file?

TfeiSong commented 10 months ago

https://github.com/yuantianyuan01/StreamMapNet/blob/main/plugin/configs/nusc_newsplit_480_60x30_24e.py Almost the same as the config above, but only one GPU is used and the total number of training samples is 19291.

yuantianyuan01 commented 10 months ago

So your effective batch size is 1/8 of the one used with 8 GPUs, which may cause instability. You can try a lower learning rate or a larger batch size to solve the problem.
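
For anyone hitting the same issue, here is a minimal sketch of what that adjustment could look like in an mmcv-style config such as nusc_newsplit_480_60x30_24e.py, following the linear scaling rule (learning rate scaled with effective batch size). The `samples_per_gpu` and `base_lr` values below are placeholders, not the actual values from the repo's config, and the `grad_clip` line is a common extra safeguard against growing gradients rather than something suggested in this thread:

```python
# Hypothetical single-GPU override following the linear scaling rule:
# when the effective batch size drops by 8x (1 GPU instead of 8),
# scale the learning rate down by 8x as well.
num_gpus = 1          # this issue's setup; the reference setup uses 8
samples_per_gpu = 4   # placeholder, not the repo's actual value
base_lr = 5e-4        # placeholder lr tuned for the 8-GPU setup

lr = base_lr * num_gpus / 8  # 8x smaller effective batch -> 8x smaller lr

data = dict(samples_per_gpu=samples_per_gpu)
optimizer = dict(type='AdamW', lr=lr, weight_decay=0.01)

# Optional safeguard (not suggested in the thread): mmcv's optimizer hook
# can clip gradients, which also logs grad_norm for monitoring.
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```

Alternatively, keeping the original learning rate and raising `samples_per_gpu` (if GPU memory allows) restores the same effective batch size.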

TfeiSong commented 10 months ago

Thank you very much. I'll try it right away.