hirotomusiker / CLRerNet

The official implementation of "CLRerNet: Improving Confidence of Lane Detection with LaneIoU"
Apache License 2.0
181 stars 19 forks

DataLoader worker killed #40

Closed tarkanozsen closed 3 months ago

tarkanozsen commented 6 months ago

Hello again, I sincerely appreciate the help so far. After minimizing external CPU usage, the training seems to have nearly finished the first epoch but was killed before completing it. I was wondering whether this is also CPU-related, or whether there is something that can be done as a fix. Also, is an 8-day ETA for 15 epochs normal, and do you have any tips for reducing the training time? Thank you.

2024-05-22 21:07:18,021 - mmdet - INFO - workflow: [('train', 1)], max: 15 epochs
2024-05-22 21:07:18,022 - mmdet - INFO - Checkpoints will be saved to /work/work_dirs/clrernet_culane_dla34 by HardDiskBackend.
2024-05-22 21:44:11,176 - mmdet - INFO - Epoch [1][100/2321] lr: 6.000e-04, eta: 8 days, 21:23:42, time: 22.129, data_time: 0.108, memory: 7316, loss_cls: 1.4577, loss_reg_xytl: 4.2272, loss_iou: 3.1644, loss_seg: 0.7483, loss: 9.5976
2024-05-22 22:20:47,725 - mmdet - INFO - Epoch [1][200/2321] lr: 6.000e-04, eta: 8 days, 19:59:34, time: 21.966, data_time: 0.102, memory: 7316, loss_cls: 0.6684, loss_reg_xytl: 1.1171, loss_iou: 1.7489, loss_seg: 0.4764, loss: 4.0107
2024-05-22 22:57:20,463 - mmdet - INFO - Epoch [1][300/2321] lr: 5.999e-04, eta: 8 days, 18:59:48, time: 21.927, data_time: 0.046, memory: 7316, loss_cls: 0.5934, loss_reg_xytl: 0.7888, loss_iou: 1.3565, loss_seg: 0.4259, loss: 3.1646
2024-05-22 23:33:56,920 - mmdet - INFO - Epoch [1][400/2321] lr: 5.998e-04, eta: 8 days, 18:16:57, time: 21.964, data_time: 0.052, memory: 7316, loss_cls: 0.5567, loss_reg_xytl: 0.6247, loss_iou: 1.2230, loss_seg: 0.3952, loss: 2.7996
2024-05-23 00:10:29,495 - mmdet - INFO - Epoch [1][500/2321] lr: 5.997e-04, eta: 8 days, 17:32:10, time: 21.926, data_time: 0.047, memory: 7316, loss_cls: 0.5643, loss_reg_xytl: 0.5998, loss_iou: 1.1512, loss_seg: 0.3811, loss: 2.6965
2024-05-23 00:47:02,466 - mmdet - INFO - Epoch [1][600/2321] lr: 5.996e-04, eta: 8 days, 16:50:31, time: 21.930, data_time: 0.062, memory: 7316, loss_cls: 0.5108, loss_reg_xytl: 0.5260, loss_iou: 1.0538, loss_seg: 0.3522, loss: 2.4427
2024-05-23 01:23:33,552 - mmdet - INFO - Epoch [1][700/2321] lr: 5.994e-04, eta: 8 days, 16:08:47, time: 21.911, data_time: 0.047, memory: 7316, loss_cls: 0.4950, loss_reg_xytl: 0.5165, loss_iou: 1.0385, loss_seg: 0.3437, loss: 2.3938
2024-05-23 02:00:07,698 - mmdet - INFO - Epoch [1][800/2321] lr: 5.992e-04, eta: 8 days, 15:30:31, time: 21.942, data_time: 0.048, memory: 7316, loss_cls: 0.4667, loss_reg_xytl: 0.5003, loss_iou: 0.9925, loss_seg: 0.3310, loss: 2.2905
2024-05-23 02:36:42,258 - mmdet - INFO - Epoch [1][900/2321] lr: 5.990e-04, eta: 8 days, 14:52:54, time: 21.946, data_time: 0.065, memory: 7316, loss_cls: 0.4701, loss_reg_xytl: 0.5075, loss_iou: 1.0236, loss_seg: 0.3194, loss: 2.3207
2024-05-23 03:13:07,306 - mmdet - INFO - Exp name: clrernet_culane_dla34.py
2024-05-23 03:13:07,318 - mmdet - INFO - Epoch [1][1000/2321] lr: 5.988e-04, eta: 8 days, 14:10:08, time: 21.851, data_time: 0.046, memory: 7316, loss_cls: 0.4574, loss_reg_xytl: 0.4629, loss_iou: 0.9611, loss_seg: 0.3089, loss: 2.1903
2024-05-23 03:49:36,562 - mmdet - INFO - Epoch [1][1100/2321] lr: 5.985e-04, eta: 8 days, 13:30:39, time: 21.893, data_time: 0.048, memory: 7316, loss_cls: 0.4532, loss_reg_xytl: 0.4518, loss_iou: 0.9323, loss_seg: 0.2991, loss: 2.1364
2024-05-23 04:26:04,675 - mmdet - INFO - Epoch [1][1200/2321] lr: 5.982e-04, eta: 8 days, 12:51:09, time: 21.881, data_time: 0.047, memory: 7316, loss_cls: 0.4707, loss_reg_xytl: 0.4634, loss_iou: 0.9506, loss_seg: 0.3010, loss: 2.1856
2024-05-23 05:03:06,850 - mmdet - INFO - Epoch [1][1300/2321] lr: 5.979e-04, eta: 8 days, 12:26:45, time: 22.222, data_time: 0.049, memory: 7316, loss_cls: 0.4478, loss_reg_xytl: 0.4411, loss_iou: 0.9235, loss_seg: 0.2923, loss: 2.1048
2024-05-23 05:40:05,439 - mmdet - INFO - Epoch [1][1400/2321] lr: 5.976e-04, eta: 8 days, 11:59:07, time: 22.186, data_time: 0.046, memory: 7316, loss_cls: 0.4524, loss_reg_xytl: 0.4364, loss_iou: 0.9035, loss_seg: 0.2863, loss: 2.0785
2024-05-23 06:16:43,445 - mmdet - INFO - Epoch [1][1500/2321] lr: 5.973e-04, eta: 8 days, 11:22:37, time: 21.980, data_time: 0.054, memory: 7316, loss_cls: 0.4369, loss_reg_xytl: 0.4332, loss_iou: 0.8913, loss_seg: 0.2824, loss: 2.0438
2024-05-23 06:53:29,022 - mmdet - INFO - Epoch [1][1600/2321] lr: 5.969e-04, eta: 8 days, 10:48:43, time: 22.056, data_time: 0.054, memory: 7316, loss_cls: 0.4575, loss_reg_xytl: 0.4385, loss_iou: 0.8982, loss_seg: 0.2821, loss: 2.0763
2024-05-23 07:30:18,971 - mmdet - INFO - Epoch [1][1700/2321] lr: 5.965e-04, eta: 8 days, 10:15:54, time: 22.100, data_time: 0.060, memory: 7316, loss_cls: 0.4385, loss_reg_xytl: 0.4029, loss_iou: 0.8641, loss_seg: 0.2751, loss: 1.9805
2024-05-23 08:07:11,293 - mmdet - INFO - Epoch [1][1800/2321] lr: 5.961e-04, eta: 8 days, 9:43:22, time: 22.123, data_time: 0.065, memory: 7316, loss_cls: 0.4638, loss_reg_xytl: 0.4129, loss_iou: 0.8702, loss_seg: 0.2758, loss: 2.0227
2024-05-23 08:43:48,508 - mmdet - INFO - Epoch [1][1900/2321] lr: 5.956e-04, eta: 8 days, 9:06:01, time: 21.972, data_time: 0.052, memory: 7316, loss_cls: 0.4276, loss_reg_xytl: 0.4053, loss_iou: 0.8503, loss_seg: 0.2656, loss: 1.9487
2024-05-23 09:20:33,878 - mmdet - INFO - Exp name: clrernet_culane_dla34.py
2024-05-23 09:20:33,884 - mmdet - INFO - Epoch [1][2000/2321] lr: 5.951e-04, eta: 8 days, 8:30:58, time: 22.054, data_time: 0.062, memory: 7316, loss_cls: 0.4361, loss_reg_xytl: 0.4029, loss_iou: 0.8531, loss_seg: 0.2665, loss: 1.9585
2024-05-23 09:57:16,783 - mmdet - INFO - Epoch [1][2100/2321] lr: 5.946e-04, eta: 8 days, 7:55:07, time: 22.029, data_time: 0.061, memory: 7316, loss_cls: 0.4341, loss_reg_xytl: 0.3996, loss_iou: 0.8588, loss_seg: 0.2658, loss: 1.9583
2024-05-23 10:34:13,805 - mmdet - INFO - Epoch [1][2200/2321] lr: 5.941e-04, eta: 8 days, 7:22:40, time: 22.170, data_time: 0.068, memory: 7316, loss_cls: 0.4198, loss_reg_xytl: 0.4000, loss_iou: 0.8454, loss_seg: 0.2639, loss: 1.9291
2024-05-23 11:11:05,195 - mmdet - INFO - Epoch [1][2300/2321] lr: 5.936e-04, eta: 8 days, 6:48:30, time: 22.114, data_time: 0.073, memory: 7316, loss_cls: 0.4204, loss_reg_xytl: 0.3928, loss_iou: 0.8161, loss_seg: 0.2563, loss: 1.8856
Traceback (most recent call last):
  File "tools/train.py", line 203, in <module>
    main()
  File "tools/train.py", line 191, in main
    train_detector(
  File "/home/docker/mmdetection/mmdet/apis/train.py", line 246, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 53, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 31, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 77, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/docker/mmdetection/mmdet/models/detectors/base.py", line 248, in train_step
    losses = self(**data)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 119, in new_func
    return old_func(*args, **kwargs)
  File "/home/docker/mmdetection/mmdet/models/detectors/base.py", line 172, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/work/libs/models/detectors/clrernet.py", line 37, in forward_train
    losses = self.bbox_head.forward_train(x, img_metas)
  File "/work/libs/models/dense_heads/clrernet_head.py", line 369, in forward_train
    losses = self.loss(out_dict, img_metas)
  File "/work/libs/models/dense_heads/clrernet_head.py", line 350, in loss
    tgt_masks = torch.tensor(tgt_masks).long().to(device)  # (B, H, W)
  File "/home/docker/.pyenv/versions/3.8.4/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2539) is killed by signal: Killed.
docker@9b3bd8bbf96a:/work$
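
Note: a DataLoader worker "killed by signal: Killed" generally means the operating system terminated the worker process, most often because the host or container ran out of RAM or shared memory (not GPU memory). If that is the cause here, two commonly tried mitigations are starting the Docker container with a larger shared-memory size or using fewer dataloader workers; the lines below are only illustrative and assume the standard mmdet 2.x data config keys, not something confirmed for this repository:

docker run --shm-size=8g ...           # start the container with more shared memory (other flags elided)
--cfg-options data.workers_per_gpu=2   # append to the train command to use fewer dataloader workers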

hirotomusiker commented 6 months ago

Thank you. Training takes less than 24 hours on a single GPU, so an 8-day ETA is not normal. Can you check nvidia-smi while training?
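
In case it is useful, one way to keep the readout refreshing while the training job runs (assuming nvidia-smi is available inside the container) is to loop it from a second terminal:

nvidia-smi -l 1   # reprint GPU utilization and memory usage every second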

tarkanozsen commented 6 months ago

Of course, here you go:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 552.44                 Driver Version: 552.44         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 TCC/WDDM      | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060 ...  WDDM  |   00000000:01:00.0 Off |                  N/A |
| N/A   63C    P3             52W /  55W  |    5963MiB /   6144MiB |     87%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

hirotomusiker commented 6 months ago

Thank you. One possible reason for the slow training is the limited GPU memory (6.1 GB). Could you check the ETA after adding

--cfg-options data.samples_per_gpu=16

to the train command?
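
For reference, the full command might look like the following; the config path is only assumed from the experiment name (clrernet_culane_dla34.py) in your log, so please adjust it to your setup:

python tools/train.py configs/clrernet/culane/clrernet_culane_dla34.py --cfg-options data.samples_per_gpu=16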