BasharSu closed this issue 2 years ago.
I can second this; I first tried with a custom dataset and hit this issue. To confirm, I downloaded the COCO dataset, trained with default settings, and got the same error.
It is a bug in the CIoU loss calculation; we will fix it.
I hit the same bug; I hope it gets fixed soon.
> It is a bug in the CIoU loss calculation; we will fix it.

Hi, has this problem been fixed?
I have run into the same bug; I hope it will be fixed soon. Many thanks.
> It is a bug in the CIoU loss calculation; we will fix it.
Could you give me a hint about how to fix it? Maybe I could help with this, since I have only been working on it for a few days.
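For anyone digging into this, here is a minimal sketch of where NaNs typically show up in a CIoU-style IoU computation. It follows the common bbox_iou pattern; the eps guards are a hypothetical illustration of the kind of fix needed, not the repository's actual patch:

```python
import math
import torch

def bbox_ciou(box1, box2, eps=1e-7):
    # Boxes in (x1, y1, x2, y2) format. Without the eps terms, degenerate
    # boxes (zero width/height) or a zero-size enclosing box lead to
    # divisions by zero and hence nan in the loss.
    b1_x1, b1_y1, b1_x2, b1_y2 = box1.unbind(-1)
    b2_x1, b2_y1, b2_x2, b2_y2 = box2.unbind(-1)

    # Intersection and union
    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Squared diagonal of the smallest enclosing box (c2) and squared
    # center distance (rho2)
    cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)
    ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1)
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
            (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4

    # Aspect-ratio consistency term; eps in the atan denominators keeps the
    # arguments finite when a box has zero height
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) -
                              torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (v - iou + (1 + eps))
    return iou - (rho2 / c2 + v * alpha)
```

If the NaN comes from degenerate target boxes or an empty enclosing box, guards like the eps terms above keep every division and atan argument finite.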
Any progress?
Is there any progress?
I am facing the same issue, but with pose model training.
I tried training a custom dataset on yolov7-w6 (after adjusting the number of classes), but I keep getting NaN values for the box and total loss. The validation P, R, and mAP values rise with each epoch as expected, so I suspect it is only a display issue rather than a real training problem.
Input command:

```
python train_aux.py --device 0 --batch-size 8 --data data/custom.yaml --img 1280 1280 --cfg yolov7-w6.yaml --weights '' --name w6-test --hyp data/hyp.scratch.p6.yaml --epochs 150
```
Output example:

```
box    obj       cls    total    labels    img_size
nan    0.007422  0      nan      22        1280
```
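If the metrics keep improving, the NaN may live only in the running average that the progress bar prints, not in the loss actually being optimized. A quick way to check is a small assertion placed right after the loss is computed; this is a hypothetical helper, and the `loss` / `loss_items` names follow the usual YOLOv7 `train.py` structure rather than being the repository's own code:

```python
import torch

def assert_finite_loss(loss, loss_items):
    # Call this right after the loss is computed in the training loop.
    # If it never triggers while the progress bar still shows nan, the nan
    # is only in the display/averaging, not in the optimized loss.
    if not torch.isfinite(loss).all():
        raise RuntimeError(f"non-finite training loss: {loss_items}")
```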