youngwanLEE / CenterMask

[CVPR 2020] CenterMask : Real-Time Anchor-Free Instance Segmentation
https://arxiv.org/abs/1911.06667

loss nan #17

Closed: enhany closed this issue 4 years ago

enhany commented 4 years ago

When I try to train on the COCO 2014 dataset, the loss sometimes becomes NaN. It does not happen on every run; sometimes training proceeds normally.

Command line:

python tools/train_net.py --config-file "configs/centermask/centermask_V_39_eSE_FPN_lite_res600_ms_bs16_4x.yaml" SOLVER.IMS_PER_BATCH 4 SOLVER.TEST_PERIOD 1000 DATASETS.TRAIN "('coco_2014_train',)" DATASETS.TEST "('coco_2014_minival',)"

Environment:

OS: Microsoft Windows 10 Pro
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
[pip3] numpy==1.18.1
[pip3] torch==1.1.0
[pip3] torchvision==0.3.0
[pip3] Pillow==6.1.0

Bad results from the beginning:

2020-02-09 23:27:24,918 maskrcnn_benchmark.trainer INFO: eta: 7 days, 22:00:10  iter: 20  loss: nan (nan)  loss_mask: nan (nan)  loss_maskiou: nan (nan)  loss_cls: 21.4419 (18.3350)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.3928 (1.9001)  data: 0.0029 (1.4490)  lr: 0.003333  max mem: 4277
2020-02-09 23:37:35,529 maskrcnn_benchmark.trainer INFO: eta: 8 days, 1:01:12  iter: 20  loss: 23542148363337776622290525596155904.0000 (nan)  loss_mask: 0.6979 (424800877876977736876032.0000)  loss_maskiou: 23542148363337776622290525596155904.0000 (nan)  loss_cls: 1.1945 (11.2513)  loss_reg: 0.9993 (nan)  loss_centerness: 0.7402 (nan)  time: 0.4166 (1.9303)  data: 0.0025 (1.4344)  lr: 0.003333  max mem: 4481
2020-02-09 23:38:44,647 maskrcnn_benchmark.trainer INFO: eta: 7 days, 21:55:19  iter: 20  loss: nan (nan)  loss_mask: 1.5184 (6998281.7163)  loss_maskiou: nan (nan)  loss_cls: 21.3143 (16.3219)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.3993 (1.8993)  data: 0.0030 (1.4402)  lr: 0.003333  max mem: 4468
2020-02-09 23:42:07,362 maskrcnn_benchmark.trainer INFO: eta: 8 days, 17:18:19  iter: 20  loss: 3.6971 (nan)  loss_mask: 0.6961 (15.4345)  loss_maskiou: 0.2314 (nan)  loss_cls: 1.0910 (6.1945)  loss_reg: 0.9994 (nan)  loss_centerness: 0.6840 (nan)  time: 0.6344 (2.0932)  data: 0.0040 (1.4359)  lr: 0.003333  max mem: 4492
2020-02-09 23:42:17,270 maskrcnn_benchmark.trainer INFO: eta: 5 days, 9:24:52  iter: 40  loss: nan (nan)  loss_mask: 1.8518 (8.6920)  loss_maskiou: nan (nan)  loss_cls: 21.4820 (13.8231)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.4881 (1.2943)  data: 0.0045 (0.7206)  lr: 0.003333  max mem: 4492
2020-02-09 23:52:21,182 maskrcnn_benchmark.trainer INFO: eta: 8 days, 7:18:50  iter: 20  loss: nan (nan)  loss_mask: nan (nan)  loss_maskiou: nan (nan)  loss_cls: 21.3143 (16.3264)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.5104 (1.9933)  data: 0.0035 (1.4069)  lr: 0.003333  max mem: 4513
2020-02-09 23:52:31,129 maskrcnn_benchmark.trainer INFO: eta: 5 days, 4:30:52  iter: 40  loss: nan (nan)  loss_mask: nan (nan)  loss_maskiou: nan (nan)  loss_cls: 21.3671 (18.8594)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.4871 (1.2453)  data: 0.0040 (0.7057)  lr: 0.003333  max mem: 4513
2020-02-09 23:53:46,776 maskrcnn_benchmark.trainer INFO: eta: 8 days, 5:16:40  iter: 20  loss: nan (nan)  loss_mask: 1675440662589417990389760.0000 (2184793125528002094956544.0000)  loss_maskiou: nan (nan)  loss_cls: 21.3885 (17.3715)  loss_reg: nan (nan)  loss_centerness: nan (nan)  time: 0.4871 (1.9729)  data: 0.0039 (1.4136)  lr: 0.003333  max mem: 4217

OK results:

2020-02-09 23:44:14,344 maskrcnn_benchmark.trainer INFO: eta: 8 days, 17:40:27  iter: 20  loss: 3.5707 (3.6511)  loss_mask: 0.7062 (0.7645)  loss_maskiou: 0.0637 (0.1216)  loss_cls: 1.0763 (1.0718)  loss_reg: 0.9982 (0.9978)  loss_centerness: 0.6932 (0.6954)  time: 0.6289 (2.0969)  data: 0.0035 (1.4190)  lr: 0.003333  max mem: 4548
2020-02-09 23:44:27,658 maskrcnn_benchmark.trainer INFO: eta: 5 days, 18:06:41  iter: 40  loss: 3.3980 (3.5286)  loss_mask: 0.6923 (0.7301)  loss_maskiou: 0.0382 (0.0867)  loss_cls: 0.9761 (1.0231)  loss_reg: 0.9968 (0.9970)  loss_centerness: 0.6855 (0.6916)  time: 0.6582 (1.3813)  data: 0.0040 (0.7119)  lr: 0.003333  max mem: 4578
2020-02-09 23:44:40,926 maskrcnn_benchmark.trainer INFO: eta: 4 days, 18:10:50  iter: 60  loss: 3.3664 (3.4819)  loss_mask: 0.6919 (0.7175)  loss_maskiou: 0.0355 (0.0710)  loss_cls: 0.9938 (1.0140)  loss_reg: 0.9918 (0.9940)  loss_centerness: 0.6751 (0.6854)  time: 0.6557 (1.1420)  data: 0.0040 (0.4762)  lr: 0.003333  max mem: 4728
2020-02-09 23:44:54,655 maskrcnn_benchmark.trainer INFO: eta: 4 days, 6:47:16  iter: 80  loss: 3.2235 (3.4085)  loss_mask: 0.6806 (0.7088)  loss_maskiou: 0.0246 (0.0602)  loss_cls: 0.9886 (1.0100)  loss_reg: 0.8146 (0.9449)  loss_centerness: 0.6809 (0.6846)  time: 0.6860 (1.0281)  data: 0.0049 (0.3585)  lr: 0.003333  max mem: 4728
2020-02-09 23:45:08,560 maskrcnn_benchmark.trainer INFO: eta: 4 days, 0:07:35  iter: 100  loss: 3.0076 (3.3381)  loss_mask: 0.6819 (0.7048)  loss_maskiou: 0.0167 (0.0518)  loss_cls: 0.9510 (1.0000)  loss_reg: 0.6715 (0.8965)  loss_centerness: 0.6853 (0.6849)  time: 0.6875 (0.9615)  data: 0.0045 (0.2879)  lr: 0.003333  max mem: 4728
youngwanLEE commented 4 years ago

@enhany

I have also often experienced this phenomenon in maskrcnn-benchmark and FCOS.

My guess is that it comes from the random initialization of the weights other than the backbone (the backbone is initialized from pretrained weights).

enhany commented 4 years ago

Lowering the LR helps (10-100 times lower).
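
For example (the value here is only illustrative; check SOLVER.BASE_LR in the YAML you are using and divide it by 10-100), the learning rate can be overridden from the command line the same way as the other SOLVER options:

python tools/train_net.py --config-file "configs/centermask/centermask_V_39_eSE_FPN_lite_res600_ms_bs16_4x.yaml" SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.001 SOLVER.TEST_PERIOD 1000 DATASETS.TRAIN "('coco_2014_train',)" DATASETS.TEST "('coco_2014_minival',)"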

zimenglan-sysu-512 commented 4 years ago

@enhany I hit the NaN problem too. If I lower the LR by 10-100 times, will it hurt performance?

enhany commented 4 years ago

@zimenglan-sysu-512 You also need to increase MAX_ITER by 10-100 times to compensate. So yes, it will take more time to train.
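
For example (the numbers below are placeholders, not the defaults of that config; read SOLVER.MAX_ITER and SOLVER.STEPS from your YAML and multiply them by the same factor you used to lower the LR), the schedule can be stretched from the command line as well:

python tools/train_net.py --config-file "configs/centermask/centermask_V_39_eSE_FPN_lite_res600_ms_bs16_4x.yaml" SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.001 SOLVER.MAX_ITER 1800000 SOLVER.STEPS "(1500000, 1700000)" SOLVER.TEST_PERIOD 1000 DATASETS.TRAIN "('coco_2014_train',)" DATASETS.TEST "('coco_2014_minival',)"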