Victcode opened this issue 3 years ago
Bumping this, would love to have an example of how to run this in the wild when you have a chance! I've tried many different iterations, but I can't seem to get the pretrained models to work, as the output 3D points just seem random. Can you confirm whether the input 2D keypoints must be ground-truth H36M keypoints or COCO keypoints?
Realized my issue was likely either not having the environment set up properly, or trying to run it on a CPU instead of a GPU.
@jimmybuffi
Same problem occurred. I guess VideoPose3D needs COCO keypoints, which is different from the H36M keypoint order. Have you solved the problem?
@vicentowang Using their gt81f model, the input required was the H36M keypoint format. I'm not exactly sure what my issue was originally, but when I set up the environment exactly as they specified on a GPU machine, using the H36M keypoint format, it worked and the issue was resolved.
@jimmybuffi How do I get the H36M keypoint format? I use Detectron2 according to https://github.com/facebookresearch/VideoPose3D/blob/main/INFERENCE.md, but that produces the COCO format, so I get the wrong result.
```shell
cd inference
python infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir output_directory \
    --image-ext mp4 \
    input_directory
```
I used this script from this repo to do the conversion...
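The conversion mentioned here roughly amounts to reordering the joints and synthesizing the joints that COCO lacks (pelvis, spine, thorax, head). A minimal sketch, assuming the standard 17-keypoint COCO layout and the usual 17-joint H36M order — the index mapping and midpoint approximations below are assumptions for illustration, not necessarily the exact preprocessing the pretrained gt81f model expects:

```python
import numpy as np

# Standard COCO 17-keypoint indices (assumed layout)
COCO = dict(nose=0, l_eye=1, r_eye=2, l_ear=3, r_ear=4,
            l_sho=5, r_sho=6, l_elb=7, r_elb=8, l_wri=9, r_wri=10,
            l_hip=11, r_hip=12, l_knee=13, r_knee=14, l_ank=15, r_ank=16)

def coco_to_h36m(kp):
    """Map a (17, 2) array of COCO 2D keypoints to the 17-joint H36M order.

    Pelvis, spine, thorax and head have no COCO counterpart, so they are
    approximated as midpoints -- an assumption, not the dataset's own
    ground-truth joints.
    """
    h36m = np.zeros_like(kp)
    pelvis = (kp[COCO['l_hip']] + kp[COCO['r_hip']]) / 2
    thorax = (kp[COCO['l_sho']] + kp[COCO['r_sho']]) / 2
    spine = (pelvis + thorax) / 2
    head = (kp[COCO['l_ear']] + kp[COCO['r_ear']]) / 2
    h36m[0] = pelvis                 # hip (pelvis)
    h36m[1] = kp[COCO['r_hip']]      # right hip
    h36m[2] = kp[COCO['r_knee']]     # right knee
    h36m[3] = kp[COCO['r_ank']]      # right ankle
    h36m[4] = kp[COCO['l_hip']]      # left hip
    h36m[5] = kp[COCO['l_knee']]     # left knee
    h36m[6] = kp[COCO['l_ank']]      # left ankle
    h36m[7] = spine                  # spine (synthesized)
    h36m[8] = thorax                 # thorax (synthesized)
    h36m[9] = kp[COCO['nose']]       # neck/nose
    h36m[10] = head                  # head (synthesized)
    h36m[11] = kp[COCO['l_sho']]     # left shoulder
    h36m[12] = kp[COCO['l_elb']]     # left elbow
    h36m[13] = kp[COCO['l_wri']]     # left wrist
    h36m[14] = kp[COCO['r_sho']]     # right shoulder
    h36m[15] = kp[COCO['r_elb']]     # right elbow
    h36m[16] = kp[COCO['r_wri']]     # right wrist
    return h36m
```

If the 3D output still looks random after reordering, it is worth checking against whatever conversion script the repo actually ships, since normalization conventions (image-space vs. normalized coordinates) also differ between pipelines.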
@jimmybuffi
I tried it, but got the wrong result. What about your experiment? Thanks anyway.
> I used this script from this repo to do the conversion...
@jimmybuffi Can you specify exactly what I should do to make PoseFormer in-the-wild inference run? Which code should I modify? Thanks very much!
Thanks for your great work. I used the VideoPose3D code to run on an in-the-wild video and got the wrong result. Do I need to make changes to the original VideoPose3D code?