-
Hi, when doing the data augmentation, you normalized the depth to -1~1 by dividing by bbox_3d_shape[0]/2, as in `joint_img[i, 2] /= (cfg.bbox_3d_shape[0]/2.) # expect depth lies in -bbox_3d_shape[0]/2 ~ bbox_3d…`
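For reference, a minimal sketch of the normalization being asked about, assuming the comment's convention that the raw root-relative depth lies in -bbox_3d_shape[0]/2 ~ bbox_3d_shape[0]/2 (the concrete value of `bbox_3d_shape` and the helper name are illustrative):

```python
import numpy as np

# Illustrative value; in the actual code this comes from cfg.bbox_3d_shape.
bbox_3d_shape = (2000, 2000, 2000)  # (depth, height, width) in mm

def normalize_depth(joint_img: np.ndarray) -> np.ndarray:
    """Map depth from [-bbox_3d_shape[0]/2, bbox_3d_shape[0]/2] to [-1, 1]."""
    out = joint_img.copy()
    out[:, 2] /= (bbox_3d_shape[0] / 2.0)
    return out

joints = np.array([[100.0, 200.0, -750.0],   # x, y in pixels; z in mm
                   [150.0, 220.0,  500.0]])
print(normalize_depth(joints)[:, 2])  # -> [-0.75  0.5], i.e. within [-1, 1]
```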
-
Hi,
I'm not actually sure of the maximum number of tracked bodies (is this known?), but in certain situations there may be a crowd of people watching a single person interact, and I can see situation…
-
I downloaded the annotation file for the MuPoTS dataset from your link. The 2D coordinates are in `keypoints_img`, the x and y of the corresponding 3D coordinates are the same as `keypoints_img`, and the z comes from keypoint…
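A minimal sketch for inspecting the annotation file to check this, assuming a COCO-style JSON layout (the filename and the `keypoints_cam` field are assumptions; only `keypoints_img` is named in the question):

```python
import json

# Hypothetical path; substitute the actual annotation file from the link.
with open("MuPoTS-3D.json") as f:
    anns = json.load(f)["annotations"]

ann = anns[0]
kps_img = ann["keypoints_img"]      # per-joint [x_px, y_px] (from the question)
print(len(kps_img), kps_img[0])
# If the release also ships camera-space joints (an assumption), compare them
# to see whether the 3D x/y duplicate the pixel coords or are metric values:
if "keypoints_cam" in ann:
    print(ann["keypoints_cam"][0])  # expected [X_mm, Y_mm, Z_mm] in camera space
```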
-
Dear @una-dinosauria,
Can you please tell me how to test on a video from YouTube, or on my own picture, after finishing training?
-
My goal is to finetune the published model on my 2D pose dataset.
As a starting point, I'm trying to create the simplest training script without Human3.6M.
When running
`python -m src.main --pretr…`
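For context, a minimal sketch of the kind of fine-tuning setup being attempted, assuming a PyTorch checkpoint (the model class, checkpoint path, and hyperparameters are hypothetical stand-ins; the repo's actual `src.main` entry point and flags may differ):

```python
import torch
import torch.nn as nn

class LiftingModel(nn.Module):
    """Placeholder 2D-to-3D lifting network; stands in for the repo's model."""
    def __init__(self, n_joints: int = 17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, 1024), nn.ReLU(),
            nn.Linear(1024, n_joints * 3),
        )

    def forward(self, x):
        return self.net(x)

model = LiftingModel()
torch.save(model.state_dict(), "pretrained.pth")  # stand-in for the published checkpoint

# Load pretrained weights, then fine-tune with a small learning rate.
state = torch.load("pretrained.pth", map_location="cpu")
model.load_state_dict(state, strict=False)  # strict=False tolerates head mismatches
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```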
-
Hello,
I would like to understand how to read the .csv dataset files for the JHMDB dataset. What is the format? What do the lines and columns signify?
How were those files created from the .mat joint positi…
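For reference, a minimal sketch for inspecting a JHMDB `.mat` annotation with SciPy, which is one way to see what the derived .csv columns might correspond to (`joint_positions.mat` is the conventional JHMDB filename and is an assumption here; the printed field names are whatever the file actually contains):

```python
from scipy.io import loadmat

mat = loadmat("joint_positions.mat")
for key, val in mat.items():
    if not key.startswith("__"):          # skip MATLAB metadata entries
        print(key, getattr(val, "shape", type(val)))
# A 2 x n_joints x n_frames array would map naturally onto CSV rows of
# per-frame (x, y) joint coordinates, but verify against the file itself.
```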
-
Hi, thank you for your work, guys. Are you going to open-source the DetectNet part?
Regards
-
Hello,
I have a few follow-up questions while playing around with your implementation.
Could you tell me where the 2D keypoints are stored, and the `data type` for the 2D detector `hrnet` (which varia…
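As a way to check this locally, a minimal sketch for inspecting a NumPy archive of 2D detections (the filename and key layout are assumptions; repos differ in whether they ship `.npz`, `.h5`, or pickle files):

```python
import numpy as np

# Hypothetical filename; substitute the repo's actual 2D-detection archive.
data = np.load("data_2d_detections.npz", allow_pickle=True)
for key in data.files:
    arr = data[key]
    print(key, arr.dtype, getattr(arr, "shape", None))
# 2D keypoints are commonly stored as float32 arrays of shape
# (n_frames, n_joints, 2), or (..., 3) when a confidence score is appended.
```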
-
@CHUNYUWANG Hi, thanks for providing the pre-trained model. I have a question: where can we get the corresponding heatmaps (predicted_heatmaps.h5) file? Kindly upload the heatmaps file as well and share the …
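Once such a file is available, a minimal sketch for inspecting it with h5py (the dataset keys inside the file are unknown here, so the code just enumerates them):

```python
import h5py

with h5py.File("predicted_heatmaps.h5", "r") as f:
    # List every dataset in the file along with its shape and dtype.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
# Heatmaps are typically stored as (n_images, n_joints, H, W) float arrays,
# but confirm against the shapes printed above.
```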
-
arXiv paper tracking