Open chenscottus opened 1 year ago
Is this for classic COCO or whole-body? You need to pass in --nkpt 133 for whole-body.
Whole body, same issue even with --nkpt 133
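For context, here is a quick sketch of why --nkpt changes the shape of each detection row. This assumes the common YOLO-pose layout of 6 box/confidence/class columns followed by 3 values (x, y, visibility) per keypoint; that layout is an assumption for illustration, not something confirmed from this repo's code:

```python
# Sketch: expected width of one detection row as a function of nkpt
# (assumed layout: 6 box/conf/class columns + 3 values per keypoint).
def det_width(nkpt: int) -> int:
    """Number of columns per detection row for a given keypoint count."""
    return 6 + 3 * nkpt

print(det_width(17))   # classic COCO body: 57 columns
print(det_width(133))  # COCO-WholeBody: 405 columns
```

If detect.py slices the row with the wrong nkpt, the keypoint block it extracts will have the wrong (possibly zero) width.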
Have you confirmed the repo is up to date? Can you try using detect.sh? I have verified the detection script works from a clean copy. We can continue to debug if it still fails.
Yes, the repo is one I just downloaded today.
In yolov7-pose-whole-body-main:
python3 detect.py --weights yolov7-tiny-baseline.pt --source onnx_inference/img.png --save-crop --save-txt --kpt-label
python3 detect.py --weights yolov7-tiny-baseline.pt --source onnx_inference/img.png --save-crop --save-txt --kpt-label --nkpt 133
This time it works, but the results show the mis-mapping issue is not fixed:
I want to say this is more of a poor-prediction issue than a mapping one now. I'm currently training a new model and will update when it's done.
Hi Jack,
Thank you for showing me how to run coco2yolo.py to generate Yolo labels in Training errors #3. I trained a model with the following command: python3 train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-tiny-pose-wb.yaml --weights yolov7-tiny-baseline.pt --batch-size 8 --kpt-label --device 0 --name yolov7-w6-pose --hyp data/hyp.pose.yaml --nkpt 133 --sync-bn --epochs 100
Then I used my model best.pt to predict on the images under /onnx_inference, but the results don't look good: python3 detect.py --weights best.pt --source onnx_inference/ --save-crop --save-txt --kpt-label --nkpt 133
I am training a model for more epochs. Do you have any other suggestions? Thanks a lot!
Yes, under /runs/train/yolov7-w6-pose/, there are a lot of pairs of pred.jpg and labels.jpg. The labels look good. For example, this is 000000001532_labels.jpg:
I am wondering how to do inference like this...
what about your pred files?
This is my prediction result with detect.sh. It seems the mapping issue is solved, but there is a prediction bug with the right eye and ear.
There is nothing in my pred files. For example, this is 000000001532_pred.jpg:
That is odd. Check out the new link under Pretrained Models in the README. The folder contains all my files related to the run. Only the prediction issue with the right eye/ear is left.
For training, should I use yolov7-w6-person.pt or yolov7-tiny-baseline.pt for --weights? Does the following command look good?
python3 train.py --data data/coco_kpts_my.yaml --cfg cfg/yolov7-tiny-pose-wb.yaml --weights weights/yolov7-w6-person.pt --batch-size 8 --kpt-label --device 0 --name yolov7-w6-pose --hyp data/hyp.pose.yaml --nkpt 133 --sync-bn --epochs 300
You can't use w6 because that is a different architecture. I used --img-size 640 for my results.
Your pred files look good. I used the following command to train: python3 train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-tiny-pose-wb.yaml --weights weights/yolov7-tiny-baseline.pt --batch-size 8 --kpt-label --device 0 --name yolov7-w6-pose --hyp data/hyp.pose.yaml --nkpt 133 --sync-bn --epochs 100
Is there anything wrong? I am just increasing the number of epochs.
add the --img-size 640 flag. Are you able to recreate?
Thank you! I will add --img-size 640. Recreate what?
I just trained a model with the following command. There is still nothing on my pred files. I'll use your model for now. Please let me know if you have new models. Thanks!
python3 train.py --data data/coco_kpts_my.yaml --cfg cfg/yolov7-tiny-pose-wb.yaml --weights weights/yolov7-tiny-baseline.pt --batch-size 8 --kpt-label --device 0 --name yolov7-w6-pose --hyp data/hyp.pose.yaml --nkpt 133 --img-size 640 --sync-bn --epochs 300
Hi,
I ran the detection and it fails. Here is the log:
python3 detect.py --weights yolov7_pose_whole_body_tiny_baseline.pt --source onnx_inference/img.png --save-crop
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, hide_conf=False, hide_labels=False, img_size=640, iou_thres=0.45, kpt_label=False, line_thickness=3, name='exp', nkpt=17, nosave=False, project='runs/detect', save_bin=False, save_conf=False, save_crop=True, save_txt=False, save_txt_tidl=False, source='onnx_inference/img.png', update=False, view_img=False, weights=['yolov7_pose_whole_body_tiny_baseline.pt'])
YOLOv5 2023-6-7 torch 2.0.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090 Ti, 24563.375MB)
Fusing layers...
Model Summary: 314 layers, 8862259 parameters, 0 gradients
/usr/local/lib/python3.8/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
image 1/1 /mnt/d/Workspace2021/models/yolov7_pose_whole_body/yolov7-pose-whole-body-main/onnx_inference/img.png: tensor(0.83447, device='cuda:0')
Traceback (most recent call last):
  File "detect.py", line 205, in
detect(opt=opt)
File "detect.py", line 102, in detect
scale_coords(img.shape[2:], det[:, 6:], im0.shape, kpt_label=kpt_label, step=3)
File "/mnt/d/Workspace2021/models/yolov7_pose_whole_body/yolov7-pose-whole-body-main/utils/general.py", line 385, in scale_coords
coords[:, [0, 2]] -= pad[0] # x padding
IndexError: index is out of bounds for dimension with size 0
Thanks,
-Scott
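The IndexError in the log above is consistent with the keypoint slice det[:, 6:] coming back with zero width when the command runs with the default nkpt=17 and kpt_label=False against a whole-body checkpoint; subtracting padding from the columns of an empty block then fails. A minimal NumPy sketch of just that failure mode (scale_coords itself is the repo's function; this only reproduces the empty-slice indexing, and the shapes are illustrative assumptions):

```python
import numpy as np

# If the keypoint block sliced out of the detection tensor has zero
# columns, fancy indexing of columns 0 and 2 raises IndexError.
coords = np.zeros((5, 0))  # 5 detections, 0 keypoint columns
try:
    coords[:, [0, 2]] -= 1.0  # mirrors `coords[:, [0, 2]] -= pad[0]`
except IndexError as e:
    print("IndexError:", e)
```

This is why passing --kpt-label --nkpt 133 (so the slicing matches the model's output layout) makes the script run, as the later replies in the thread confirm.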