longzw1997 / Open-GroundingDino

This is a third-party implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection".

Loss becomes inf when training the tiny model, but base and large are normal #49

Closed: QingpengNong closed this issue 11 months ago

QingpengNong commented 11 months ago

Setup:

- model: grounding_dino
- weights: groundingdino_swint_ogc.pth
- backbone: swin_T_224_1k
- training env: 8x V100 (32 GB)
- config: cfg_odvg.py
- dataset: Objects365

log:

```
Epoch: [0]  [ 0/41483]  eta: 14 days, 20:50:49  lr: 0.000100  loss: 32.9808 (32.9808)
  loss_bbox: 1.8663 (1.8663)  loss_bbox_0: 2.4479 (2.4479)  loss_bbox_1: 2.6284 (2.6284)  loss_bbox_2: 1.6473 (1.6473)  loss_bbox_3: 1.6328 (1.6328)  loss_bbox_4: 1.8657 (1.8657)  loss_bbox_interm: 2.6962 (2.6962)
  loss_ce: 0.7780 (0.7780)  loss_ce_0: 2.6423 (2.6423)  loss_ce_1: 2.4399 (2.4399)  loss_ce_2: 2.5211 (2.5211)  loss_ce_3: 2.4680 (2.4680)  loss_ce_4: 2.5602 (2.5602)  loss_ce_interm: 2.1035 (2.1035)
  loss_giou: 0.3789 (0.3789)  loss_giou_0: 0.3840 (0.3840)  loss_giou_1: 0.3836 (0.3836)  loss_giou_2: 0.3773 (0.3773)  loss_giou_3: 0.3759 (0.3759)  loss_giou_4: 0.3781 (0.3781)  loss_giou_interm: 0.4054 (0.4054)
  loss_bbox_unscaled: 0.3733 (0.3733)  loss_bbox_0_unscaled: 0.4896 (0.4896)  loss_bbox_1_unscaled: 0.5257 (0.5257)  loss_bbox_2_unscaled: 0.3295 (0.3295)  loss_bbox_3_unscaled: 0.3266 (0.3266)  loss_bbox_4_unscaled: 0.3731 (0.3731)  loss_bbox_interm_unscaled: 0.5392 (0.5392)
  loss_ce_unscaled: 0.3890 (0.3890)  loss_ce_0_unscaled: 1.3212 (1.3212)  loss_ce_1_unscaled: 1.2199 (1.2199)  loss_ce_2_unscaled: 1.2605 (1.2605)  loss_ce_3_unscaled: 1.2340 (1.2340)  loss_ce_4_unscaled: 1.2801 (1.2801)  loss_ce_interm_unscaled: 1.0518 (1.0518)
  loss_giou_unscaled: 0.1895 (0.1895)  loss_giou_0_unscaled: 0.1920 (0.1920)  loss_giou_1_unscaled: 0.1918 (0.1918)  loss_giou_2_unscaled: 0.1886 (0.1886)  loss_giou_3_unscaled: 0.1879 (0.1879)  loss_giou_4_unscaled: 0.1891 (0.1891)  loss_giou_interm_unscaled: 0.2027 (0.2027)
  loss_hw_unscaled: 0.2596 (0.2596)  loss_hw_0_unscaled: 0.3445 (0.3445)  loss_hw_1_unscaled: 0.3645 (0.3645)  loss_hw_2_unscaled: 0.2324 (0.2324)  loss_hw_3_unscaled: 0.2289 (0.2289)  loss_hw_4_unscaled: 0.2603 (0.2603)  loss_hw_interm_unscaled: 0.3786 (0.3786)
  loss_xy_unscaled: 0.1137 (0.1137)  loss_xy_0_unscaled: 0.1451 (0.1451)  loss_xy_1_unscaled: 0.1611 (0.1611)  loss_xy_2_unscaled: 0.0970 (0.0970)  loss_xy_3_unscaled: 0.0976 (0.0976)  loss_xy_4_unscaled: 0.1128 (0.1128)  loss_xy_interm_unscaled: 0.1606 (0.1606)
  time: 30.9681  data: 5.9649  max mem: 9660

Loss is inf, stopping training
{'loss_bbox': tensor(inf, device='cuda:0'), 'loss_bbox_0': tensor(inf, device='cuda:0'), 'loss_bbox_1': tensor(inf, device='cuda:0'),
 'loss_bbox_2': tensor(inf, device='cuda:0'), 'loss_bbox_3': tensor(inf, device='cuda:0'), 'loss_bbox_4': tensor(inf, device='cuda:0'),
 'loss_bbox_interm': tensor(inf, device='cuda:0'),
 'loss_ce': tensor(0.4675, device='cuda:0'), 'loss_ce_0': tensor(0.6388, device='cuda:0'), 'loss_ce_1': tensor(0.6114, device='cuda:0'),
 'loss_ce_2': tensor(0.6029, device='cuda:0'), 'loss_ce_3': tensor(0.5809, device='cuda:0'), 'loss_ce_4': tensor(0.5935, device='cuda:0'),
 'loss_ce_interm': tensor(0.6336, device='cuda:0'),
 'loss_giou': tensor(0.1472, device='cuda:0'), 'loss_giou_0': tensor(0.1555, device='cuda:0'), 'loss_giou_1': tensor(0.1515, device='cuda:0'),
 'loss_giou_2': tensor(0.1504, device='cuda:0'), 'loss_giou_3': tensor(0.1504, device='cuda:0'), 'loss_giou_4': tensor(0.1466, device='cuda:0'),
 'loss_giou_interm': tensor(0.1693, device='cuda:0'),
 'loss_hw': tensor(inf, device='cuda:0'), 'loss_hw_0': tensor(inf, device='cuda:0'), 'loss_hw_1': tensor(inf, device='cuda:0'),
 'loss_hw_2': tensor(inf, device='cuda:0'), 'loss_hw_3': tensor(inf, device='cuda:0'), 'loss_hw_4': tensor(inf, device='cuda:0'),
 'loss_hw_interm': tensor(inf, device='cuda:0'),
 'loss_xy': tensor(inf, device='cuda:0'), 'loss_xy_0': tensor(inf, device='cuda:0'), 'loss_xy_1': tensor(inf, device='cuda:0'),
 'loss_xy_2': tensor(inf, device='cuda:0'), 'loss_xy_3': tensor(inf, device='cuda:0'), 'loss_xy_4': tensor(inf, device='cuda:0'),
 'loss_xy_interm': tensor(inf, device='cuda:0')}
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1809199 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1809205 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1809211 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1809213 closing signal SIGTERM
```
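Note that in the loss dict above only the box-regression terms (loss_bbox, loss_xy, loss_hw) are inf, while loss_ce and loss_giou stay finite. In DETR-style training that pattern is often caused by bad annotations (zero-width/height or out-of-image boxes) somewhere in the data. Below is a minimal sanity-check sketch, not from this repo: the field names follow the ODVG .jsonl layout, and the path `o365_train.jsonl` is a placeholder, so adjust both to your setup.

```python
import json

def find_bad_boxes(jsonl_path):
    """Scan an ODVG-style .jsonl annotation file for boxes that are
    degenerate (zero/negative width or height) or fall outside the
    image -- a common cause of inf in the L1 box losses.
    Assumed field layout: "width"/"height" at the top level and
    "detection" -> "instances" -> "bbox" as [x0, y0, x1, y1];
    adapt these names if your annotation file differs."""
    bad = []
    with open(jsonl_path) as f:
        for line_no, line in enumerate(f, 1):
            rec = json.loads(line)
            img_w, img_h = rec["width"], rec["height"]
            for inst in rec.get("detection", {}).get("instances", []):
                x0, y0, x1, y1 = inst["bbox"]
                if not (0 <= x0 < x1 <= img_w and 0 <= y0 < y1 <= img_h):
                    bad.append((line_no, rec.get("filename"), inst["bbox"]))
    return bad

if __name__ == "__main__":
    # "o365_train.jsonl" is a placeholder; point it at your converted
    # Objects365 ODVG annotation file.
    for entry in find_bad_boxes("o365_train.jsonl")[:20]:
        print(entry)
```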

PS: training the tiny model on a single V100 also works fine; the problem only appears in the 8-GPU run.
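Since the single-GPU run is fine, a bad sample may only be hit under the 8-GPU data sharding. One way to localize it would be to log the rank and batch contents right before the check that prints "Loss is inf, stopping training" in the training loop. The helper below is a hypothetical sketch (`report_nonfinite_losses` and its `batch_ids` argument are not part of the repo); it only illustrates the kind of guard that could narrow down the offending shard.

```python
import torch
import torch.distributed as dist

def report_nonfinite_losses(loss_dict, batch_ids):
    """Hypothetical debugging helper: call it just before the training
    loop's existing finiteness check so the crash log also says which
    rank and which samples were in the batch when a loss term became
    inf/NaN. `batch_ids` is an assumed argument -- pass whatever
    identifiers your data loader exposes (e.g. image ids or paths)."""
    rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    bad = {k: v.item() for k, v in loss_dict.items()
           if not torch.isfinite(v).all()}
    if bad:
        print(f"[rank {rank}] non-finite loss terms: {bad} "
              f"batch: {batch_ids}", flush=True)
```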

Thanks in advance for taking a look at this problem.