JiaquanYe / TableMASTER-mmocr

2nd solution of ICDAR 2021 Competition on Scientific Literature Parsing, Task B.
Apache License 2.0

When training the table recognition model, grad_norm is nan at the first log step (100). Is this normal? #32

Open cqray1990 opened 2 years ago

cqray1990 commented 2 years ago

Epoch [1][100/62598] lr: 1.000e-03, eta: 6 days, 15:22:46, time: 0.539, data_time: 0.051, memory: 7239, loss_ce: 1.2757, horizon_bbox_loss: 0.2828, vertical_bbox_loss: 0.3507, loss: 1.9091, grad_norm: nan
2021-11-22 01:32:34,502 - mmocr - INFO - Epoch [1][200/62598] lr: 1.000e-03, eta: 7 days, 15:55:58, time: 0.705, data_time: 0.002, memory: 7239, loss_ce: 0.5921, horizon_bbox_loss: 0.2210, vertical_bbox_loss: 0.1055, loss: 0.9186, grad_norm: 3.9772

JiaquanYe commented 2 years ago

I think it is normal if the loss can converge. The nan grad_norm is caused by mixed precision training.
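For context, this is a minimal PyTorch sketch (not the repository's actual training loop) showing why a nan grad_norm can appear in the first iterations of mixed precision training: the loss scaler starts with a large scale, so scaled gradients may overflow to inf/nan, the optimizer step is skipped, and the scale is lowered until updates proceed normally. All names here (the toy model, `max_norm`, the print format) are illustrative assumptions.

```python
import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast

# Toy model and optimizer for illustration only (requires a CUDA device).
model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler()  # starts with a large loss scale (default 65536)

for step in range(200):
    inputs = torch.randn(8, 10, device="cuda")
    targets = torch.randint(0, 2, (8,), device="cuda")

    optimizer.zero_grad()
    with autocast():
        loss = nn.functional.cross_entropy(model(inputs), targets)

    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # gradients may be inf/nan if the scale is too high
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)

    # When grad_norm is inf/nan, scaler.step() skips the parameter update and
    # scaler.update() lowers the loss scale, so occasional nan grad_norm values
    # early in training are expected and harmless as long as the loss converges.
    scaler.step(optimizer)
    scaler.update()

    if step % 100 == 0:
        print(f"step {step}: loss={loss.item():.4f} grad_norm={grad_norm.item():.4f}")
```

In mmcv/mmocr-based training the same behavior comes from the fp16 optimizer hook rather than a hand-written loop, but the mechanism (overflowing scaled gradients in early steps, then recovery) is the same.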