WongKinYiu / ScaledYOLOv4

Scaled-YOLOv4: Scaling Cross Stage Partial Network
GNU General Public License v3.0

Why do I get all-0 results when I train scaled-yolov4-csp? Please help me, thanks! #112

Open xiaomore opened 3 years ago

xiaomore commented 3 years ago

I trained scaled-yolov4-csp on my own dataset. I changed the class and filters parameters in coco.yaml and yolov4-csp.cfg. My training command:

python train.py --device 2 --batch-size 8 --img-size 640 --epoch 50 --data coco.yaml --cfg yolov4-csp.cfg --weights 'weights/yolov4-csp.weights' --name yolov4-csp

I also scaled the label boxes in the training and validation sets according to the network input size (640), and, based on 640, used k-means clustering to generate the anchors.
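For reference, a minimal sketch of the anchor-clustering step described above (my own illustration, not a script from the repo; the paths, the class-count-independent layout, and the use of scikit-learn are assumptions), assuming YOLO-format label files with normalized coordinates and an input size of 640:

```python
# Minimal sketch: k-means anchors from YOLO-format labels (assumed layout, not the repo's tooling).
import glob
import numpy as np
from sklearn.cluster import KMeans

IMG_SIZE = 640   # network input size passed as --img-size
N_ANCHORS = 9    # yolov4-csp.cfg has 3 yolo layers with 3 anchors each

wh = []
for label_file in glob.glob("labels/train/*.txt"):        # hypothetical path
    for line in open(label_file):
        _, _, _, w, h = map(float, line.split())          # class xc yc w h (normalized)
        wh.append([w * IMG_SIZE, h * IMG_SIZE])           # convert to pixels at 640

km = KMeans(n_clusters=N_ANCHORS, n_init=10, random_state=0).fit(np.array(wh))
# Sort by area, the order used in the "anchors=" line of the .cfg
anchors = sorted(km.cluster_centers_.round().astype(int).tolist(),
                 key=lambda a: a[0] * a[1])
print(", ".join(f"{w},{h}" for w, h in anchors))
```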

In addition, I modified line 164 in general.py to x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. But I still get all-0 results, and I don't know what to do next. As a newcomer to this area, I'm asking these questions: should I scale the bboxes in the annotation files (Annotation.xml) or in the JSON files (train/val.json) based on 640?
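As context for the question about scaling boxes, here is a minimal sketch of converting Pascal VOC Annotation.xml boxes into the YOLO-format .txt labels this codebase reads, with coordinates normalized by the original image width and height (the class list, paths, and the helper itself are hypothetical):

```python
# Minimal sketch (assumptions: Pascal VOC Annotation.xml files, a fixed class list, hypothetical paths).
import glob
import os
import xml.etree.ElementTree as ET

CLASSES = ["class0", "class1"]        # hypothetical: replace with your own class names

def voc_to_yolo(xml_path, out_dir):
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls_id = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        x1, y1 = float(box.find("xmin").text), float(box.find("ymin").text)
        x2, y2 = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO label line: class x_center y_center width height, normalized to [0, 1]
        xc, yc = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h
        w, h = (x2 - x1) / img_w, (y2 - y1) / img_h
        lines.append(f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    os.makedirs(out_dir, exist_ok=True)
    name = os.path.splitext(os.path.basename(xml_path))[0]
    with open(os.path.join(out_dir, name + ".txt"), "w") as f:
        f.write("\n".join(lines))

for xml_file in glob.glob("Annotations/*.xml"):           # hypothetical path
    voc_to_yolo(xml_file, "labels/train")
```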

My result:

Epoch   gpu_mem   GIoU      obj       cls       total    targets  img_size
49/49   15.1G     0.01604   0.02093   0.001517  0.0385   29       640

Class   Images    Targets   P         R         mAP@.5   mAP@.5:.95
all     2.54e+03  1.37e+04  0.62      0.568     0.588    0.321

COCO mAP with pycocotools... saving detections_val2017__results.json...
loading annotations into memory... Done (t=0.06s)
creating index... index created!
Loading and preparing results... DONE (t=0.83s)
creating index... index created!
Running per image evaluation... Evaluate annotation type bbox DONE (t=9.11s).
Accumulating evaluation results... DONE (t=2.89s).
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Optimizer stripped from runs/exp62_yolov4-csp/weights/last_yolov4-csp.pt, 210.8MB

Please help me, thank you very much!!

WongKinYiu commented 3 years ago

To use pycocotools, you have to generate the gt.json in COCO format for your validation data and modify the file name in test.py: https://github.com/WongKinYiu/ScaledYOLOv4/blob/yolov4-csp/test.py#L245
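For illustration, a minimal sketch of building such a gt.json from YOLO-format validation labels (the paths, class names, and the script itself are assumptions, not something shipped with the repo; the image ids must match the image_id values test.py writes into its detections JSON, which are the image file stems):

```python
# Minimal sketch: COCO-format ground-truth JSON for the validation set,
# assuming YOLO-format .txt labels and hypothetical image/label paths.
import glob
import json
import os
from PIL import Image

CLASSES = ["class0", "class1"]        # hypothetical: your class names, in training order

images, annotations, ann_id = [], [], 0
for img_path in sorted(glob.glob("images/val/*.jpg")):
    stem = os.path.splitext(os.path.basename(img_path))[0]
    img_id = int(stem) if stem.isnumeric() else stem      # test.py uses the file stem as image_id
    w, h = Image.open(img_path).size
    images.append({"id": img_id, "file_name": os.path.basename(img_path),
                   "width": w, "height": h})
    label_path = os.path.join("labels/val", stem + ".txt")
    if not os.path.isfile(label_path):
        continue
    for line in open(label_path):
        c, xc, yc, bw, bh = map(float, line.split())
        # COCO boxes are [x_min, y_min, width, height] in absolute pixels.
        # category_id as the raw class index assumes the class-id list in general.py
        # was replaced with 0..N-1, as described earlier in this thread.
        annotations.append({"id": ann_id, "image_id": img_id, "category_id": int(c),
                            "bbox": [(xc - bw / 2) * w, (yc - bh / 2) * h, bw * w, bh * h],
                            "area": bw * bh * w * h, "iscrowd": 0})
        ann_id += 1

gt = {"images": images,
      "annotations": annotations,
      "categories": [{"id": i, "name": n} for i, n in enumerate(CLASSES)]}
with open("gt.json", "w") as f:
    json.dump(gt, f)
```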

engrjav commented 2 years ago

@xiaomore were you able to solve it? I am getting good AP on my custom dataset, but when I want the results in COCO format the AP is very low. Do you know how to modify line 245? In my copy, line 245 is print(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) (the pycocotools block is sketched below).

@WongKinYiu can you please guide me?
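For orientation (line numbers shift between branches and commits), the place to change is the pycocotools block near the end of test.py rather than a fixed line number. A simplified sketch of what that block does, where the ground-truth path is the part to point at your own gt.json (variable names and paths here are assumptions, not the repo's exact code):

```python
# Simplified sketch of the pycocotools evaluation step at the end of test.py
# (assumed variable names); anno_json is what should point to your own gt.json
# instead of the default COCO instances_val2017.json.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno_json = "path/to/your/gt.json"                  # your validation ground truth
pred_json = "detections_val2017__results.json"      # detections written by test.py

anno = COCO(anno_json)                              # ground truth
pred = anno.loadRes(pred_json)                      # detections
coco_eval = COCOeval(anno, pred, "bbox")
coco_eval.params.imgIds = [img["id"] for img in anno.dataset["images"]]
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                               # prints the AP/AR table shown above
```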