hikvision-research / opera

A Unified Toolbox for Object Perception & Application
Apache License 2.0

training, testing and demo testing of the petr multi-person pose model #6

Open Riverzxz opened 1 year ago

Riverzxz commented 1 year ago

Thanks for your work. I am in the process of using PETR. How do I start the training, testing, and demo testing of the PETR multi-person pose model? Could you give the steps? Looking forward to your reply.

dae-sun commented 1 year ago

Evaluation:
bash tools/dist_test.sh $CONFIG $CHECKPOINT $NUM_GPU --eval keypoints
e.g. CUDA_VISIBLE_DEVICES=1,2 bash tools/dist_test.sh configs/petr/petr_r50_16x2_100e_coco.py checkpoint/petr_r50_16x2_100e_coco.pth 2 --eval keypoints

Training:
bash tools/dist_train.sh $CONFIG $NUM_GPU
e.g. CUDA_VISIBLE_DEVICES=1,2 bash tools/dist_train.sh configs/petr/petr_r50_16x2_100e_coco.py 2

Inference:
e.g. python tools/test.py configs/petr/petr_r50_16x2_100e_coco.py checkpoint/petr_r50_16x2_100e_coco.pth --show-dir ./results

Riverzxz commented 1 year ago

Sorry to ask you again. I have an error:

Traceback (most recent call last):
  File "tools/test.py", line 251, in <module>
    main()
  File "tools/test.py", line 190, in main
    dataset = build_dataset(cfg.data.test)
  File "/home/hjq/opera-main/opera/datasets/builder.py", line 83, in build_dataset
    dataset = build_from_cfg(cfg, DATASETS, default_args)
  File "/home/hjq/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: CocoPoseDataset: [Errno 2] No such file or directory: '/dataset/public/coco/annotations/person_keypoints_val2017.json'

dae-sun commented 1 year ago

Did you download the COCO dataset and annotation files? You should download the annotation files and images from https://cocodataset.org/#download, and then update data_root = '/dataset/public/coco/' in configs/_base_/datasets/coco_keypoint.py so that it points to your local COCO directory.
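For reference, the dataset section of that config usually looks like the sketch below (illustrative only; the exact keys and defaults in opera's coco_keypoint.py may differ), so the only thing you normally need to change is data_root:

```python
# Sketch of the dataset paths in configs/_base_/datasets/coco_keypoint.py
# (key names follow the usual mmdet-style config layout; adjust to the real file).
dataset_type = 'CocoPoseDataset'
data_root = '/path/to/your/coco/'  # change this to where you unpacked COCO

data = dict(
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/person_keypoints_val2017.json',
        img_prefix=data_root + 'val2017/'))
```

With that layout, the evaluation command only needs the annotations under data_root/annotations/ and the images under data_root/val2017/.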

Riverzxz commented 1 year ago

Thank you for your help! I am currently running inference on the COCO dataset. If I want to use a model to visually test a video, for example the DEKR (CVPR 2021) model:

python tools/inference_demo.py --cfg experiments/coco/inference_demo_coco.yaml \
    --videoFile ../multi_people.mp4 \
    --outputDir output \
    --visthre 0.3 \
    TEST.MODEL_FILE model/pose_coco/pose_dekr_hrnetw32.pth

What do I need to do with the PETR model?

dae-sun commented 1 year ago

I couldn't find video inference code in this repository. I recommend converting the mp4 video into PNG frames and running inference on those, or adapting the code to handle video yourself. I am sorry that I cannot help you further.
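One simple way to do the frame conversion suggested above is a short OpenCV script like the sketch below (file and folder names are placeholders); the resulting folder of PNGs can then be fed to the image-based inference command:

```python
# Minimal sketch: dump an mp4 into per-frame PNGs for image-based inference.
import os
import cv2

video_path = 'multi_people.mp4'   # your input video
out_dir = 'frames'                # folder that will hold the extracted frames
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f'{idx:06d}.png'), frame)
    idx += 1
cap.release()
print(f'wrote {idx} frames to {out_dir}')
```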

STRUGGLE1999 commented 1 year ago

I used the following command to evaluate PETR, but I got an error.

Command:
CUDA_VISIBLE_DEVICES=1 bash tools/dist_test.sh configs/petr/petr_r50_16x2_100e_coco.py checkpoint/petr_r50_16x2_100e_coco.pth 1 --show-dir /output --eval keypoints

Could you please help me? (error screenshot attached)

Lukas-Ma1 commented 7 months ago

It seems like a torchvision problem. Please use the following configuration: Python 3.7 + torch 1.8 + CUDA 10.1 + mmcv 1.5.3 + mmdet 2.25.
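A quick way to check whether your environment matches that recommendation is to print the installed versions (a minimal check script; it only imports the packages named above):

```python
# Check installed versions against the recommended combination
# (Python 3.7 + torch 1.8 + CUDA 10.1 + mmcv 1.5.3 + mmdet 2.25).
import sys
import torch
import mmcv
import mmdet

print('python :', sys.version.split()[0])   # expect 3.7.x
print('torch  :', torch.__version__)        # expect 1.8.x
print('cuda   :', torch.version.cuda)       # expect 10.1
print('mmcv   :', mmcv.__version__)         # expect 1.5.3
print('mmdet  :', mmdet.__version__)        # expect 2.25.x
```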