-
Thank you very much for sharing your amazing work, code, and weights. These are not easily accessible for many researchers in this domain.
I do have a couple of questions regarding your cod…
-
Thank you for open-sourcing this!
I want to use J-HMDB-21 to train the model. However, from your link [here](http://jhmdb.is.tue.mpg.de/challenge/JHMDB/datasets), I can only get the RGB images but without …
-
The source code for VLOG and DAVIS is helpful for reproducing results on those datasets.
May I get the source code for the JHMDB task?
-
I am trying your normal inference.
I want to see the output image for visualization, but your script outputs a .pkl file and I don't know how to decode it. Could you please guide me?
I have read normal_inf…
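In case it helps while waiting for an answer: .pkl files are Python pickle serializations and can be decoded with the standard-library `pickle` module. This is a minimal sketch only; the actual structure of the repository's output file is not documented here, so the sample dictionary below is a hypothetical stand-in.

```python
import pickle

# Hypothetical stand-in for the script's output; the real file's
# structure depends on the inference code and must be inspected.
sample = {"frame_0001": [[0.1, 0.2, 0.9]]}
with open("sample_output.pkl", "wb") as f:
    pickle.dump(sample, f)

# Decoding: open in binary mode ("rb") and load.
with open("sample_output.pkl", "rb") as f:
    data = pickle.load(f)

# Inspect the type and keys before attempting visualization.
print(type(data), list(data.keys()))
```

Printing the type and keys first is usually the quickest way to work out how to turn the contents into an image.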
-
### Issue Summary
When a joint is out of the video frame, the keypoint corresponding to this joint has zeros for its x and y coordinates in the JSON file that OpenPose outputs. Is there an easy way …
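One common workaround is to treat zero-confidence keypoints as missing rather than as points at the origin. OpenPose writes each person's keypoints as a flat `[x, y, confidence, ...]` list under `pose_keypoints_2d`, and undetected joints come back as `(0, 0, 0)`. The JSON below is a fabricated two-joint sample for illustration:

```python
import json

# Fabricated sample in OpenPose's output layout: one detected joint
# and one missing joint reported as (0, 0, 0).
sample_json = '{"people": [{"pose_keypoints_2d": [120.5, 80.2, 0.93, 0.0, 0.0, 0.0]}]}'
frame = json.loads(sample_json)

kps = frame["people"][0]["pose_keypoints_2d"]
# Regroup the flat list into (x, y, confidence) triples.
joints = [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]

# Keep only joints with nonzero confidence.
visible = [(x, y) for x, y, c in joints if c > 0]
print(visible)  # [(120.5, 80.2)]
```

Filtering on the confidence channel avoids the out-of-frame joints skewing any downstream statistics or drawing at pixel (0, 0).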
-
I have trained YOWO on the UCF101-24 and J-HMDB-21 datasets; the best frame_mAP I can achieve is 84.3% on UCF101-24 and 70.9% on J-HMDB-21, which are far below the accuracy you report. And I see foll…
-
Hello,
I would like to understand how to read the .csv dataset files for the JHMDB dataset. What is the format? What do the lines and columns signify?
How were those files created from the .mat joint positi…
-
Hi
I saw that previously the best accuracy reported in your paper was 78% on J-HMDB, but now it is 77.2%. Can you explain why this was changed?
Thank you
-
@okankop @wei-tim thanks for open-sourcing the code base. I have a few queries:
1. Do we have an inference pipeline to test on a few sets of images, or should I use the "run_video_mAP_jhmdb.sh" file?
2. Can I…
-
Hi!
I am trying to use this on other datasets (JHMDB, UTKinect-Action3D Dataset). Would it be possible to document the skeleton file format in the README or document another way to pass in or use o…