xiaofeng-c opened 2 years ago
When running the training code on 2 GPUs, the following problem occurs:
num classes: 15
2021-11-09 21:32:31 epoch 20/353, processed 291080 samples, lr 0.000333
291144: nGT 155, recall 127, proposals 324, loss: x 4.715607, y 5.864799, w 4.005206, h 3.525136, conf 130.468964, cls 392.400330, class_contrast 1.089527, total 542.069580
291208: nGT 144, recall 129, proposals 356, loss: x 3.931613, y 5.137510, w 6.525736, h 2.192330, conf 89.379707, cls 200.923706, class_contrast 1.220589, total 309.311188
Traceback (most recent call last):
File "tool/train_decoupling_disturbance.py", line 403, in
In my experiments, I used 2 GPUs for training. You can use more GPUs or reduce the batch size, but this may affect the result.
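For anyone unsure how "use more GPUs" looks in practice: one common way in PyTorch is `torch.nn.DataParallel`, which splits each batch across the visible GPUs. This is a minimal hypothetical sketch, not necessarily the wrapper this repo's training script uses:

```python
# Minimal sketch of multi-GPU data parallelism in PyTorch.
# The Linear layer is a stand-in for the actual detector model.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
if torch.cuda.is_available():
    # Replicates the model on every visible GPU and splits each
    # input batch along dim 0 across them.
    model = nn.DataParallel(model).cuda()

x = torch.randn(16, 8).to(next(model.parameters()).device)
out = model(x)  # shape (16, 2), gathered back onto the default device
```

With more GPUs visible (e.g. via `CUDA_VISIBLE_DEVICES=0,1,2,3`), each card only holds its slice of the batch, which is what lets a larger total batch size fit.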
OK, thank you very much!
> In my experiments, I used 2 GPUs for training. You can use more GPUs or reduce the batch size, but this may affect the result.

I used 4 1080 Tis and reduced the batch size from 64 to 32 during fine-tuning, but the result is not as good as the paper reports: the mAP on VOC split 1 is only 0.385, versus 0.475 in the paper. @Bohao-Lee
> In my experiments, I used 2 GPUs for training. You can use more GPUs or reduce the batch size, but this may affect the result.
>
> I used 4 1080 Tis and reduced the batch size from 64 to 32 during fine-tuning, but the result is not as good as the paper reports: the mAP on VOC split 1 is only 0.385, versus 0.475 in the paper. @Bohao-Lee

I have not tried the reduced batch size setting, but I can reproduce the reported performance on two 3080 GPUs.
> In my experiments, I used 2 GPUs for training. You can use more GPUs or reduce the batch size, but this may affect the result.
>
> I used 4 1080 Tis and reduced the batch size from 64 to 32 during fine-tuning, but the result is not as good as the paper reports: the mAP on VOC split 1 is only 0.385, versus 0.475 in the paper. @Bohao-Lee
>
> I have not tried the reduced batch size setting, but I can reproduce the reported performance on two 3080 GPUs.

Thanks, I will try again.
I also encountered this error: RuntimeError: CUDA out of memory. Tried to allocate 422.00 MiB (GPU 0; 10.76 GiB total capacity; 9.72 GiB already allocated; 179.69 MiB free; 84.55 MiB cached). But I only have two 2080 Tis, so what should I do? Reduce the batch size? @xiaofeng-c @Bohao-Lee
> I also encountered this error: RuntimeError: CUDA out of memory. Tried to allocate 422.00 MiB (GPU 0; 10.76 GiB total capacity; 9.72 GiB already allocated; 179.69 MiB free; 84.55 MiB cached). But I only have two 2080 Tis, so what should I do? Reduce the batch size? @xiaofeng-c @Bohao-Lee

Maybe reducing the batch size can help you, but it may affect performance. @Jxt5671
I use two 3080 GPUs, but I also encountered this error: RuntimeError: CUDA out of memory. Tried to allocate 422.00 MiB. Could you please tell me the CUDA, torch, and Python versions you use? Besides, I have also tried reducing the batch size to 32. There is no problem with base training, but there is still a CUDA out-of-memory problem in fine-tuning. Should I reduce the batch size to 16? @Bohao-Lee
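For comparing environments, the versions asked about can be printed directly (assuming a PyTorch install; `torch.version.cuda` is `None` on a CPU-only build):

```python
# Report the Python, torch, and CUDA versions plus visible GPU count.
import sys
import torch

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)           # CUDA version torch was built with
print("gpus  :", torch.cuda.device_count())    # number of visible GPUs
```

Posting this output alongside the OOM report usually makes version mismatches easy to spot.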
Thank you for your work. When using your code for training, how many GPUs are needed?