-
Thanks for your work! When I test voxelpose on the Campus dataset,
I run the following command:
![image](https://user-images.githubusercontent.com/41726592/206166392-04c82ae6-b408-436d-9fb1-4daf3837473b…
-
First of all, great stuff! I ran your code and it works perfectly for my images.
I'm fairly new to building NNs, but I'm thinking of replacing the hourglass network with the new EfficientNet for feat…
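
For anyone exploring that kind of swap, here is a minimal, hedged sketch of using torchvision's EfficientNet-B0 trunk as a feature extractor with a simple heatmap head. The channel sizes, upsampling layers, and joint count are placeholder assumptions, not this repo's actual hourglass replacement:

```python
# Minimal backbone-swap sketch (assumes torchvision >= 0.13; the decoder and
# heatmap head below are hypothetical placeholders, not the repo's modules).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class EfficientNetBackbone(nn.Module):
    """Feature extractor that could stand in for the hourglass network."""

    def __init__(self, num_joints=17):
        super().__init__()
        # Keep only the convolutional trunk; pretrained ImageNet weights
        # could be passed instead of None.
        self.features = efficientnet_b0(weights=None).features
        # EfficientNet-B0's trunk ends with 1280 channels at 1/32 resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(1280, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.heatmap_head = nn.Conv2d(256, num_joints, kernel_size=1)

    def forward(self, x):
        x = self.features(x)          # (B, 1280, H/32, W/32)
        x = self.up(x)                # (B, 256, H/8, W/8)
        return self.heatmap_head(x)   # (B, num_joints, H/8, W/8)


if __name__ == "__main__":
    model = EfficientNetBackbone()
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 17, 32, 32])
```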
-
How many FPS do you get in your GPU environment?
Can it run in real time?
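
A rough way to answer this on any hardware is to time the forward pass directly. The sketch below assumes a PyTorch model and a placeholder input resolution, not this repo's exact preprocessing pipeline:

```python
# Rough FPS measurement sketch (model and input resolution are placeholders;
# replace them with the actual network and preprocessing used here).
import time
import torch


def measure_fps(model, input_size=(1, 3, 256, 256), n_warmup=10, n_runs=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):          # warm up kernels / cuDNN autotune
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()       # wait for all GPU work to finish
    return n_runs / (time.perf_counter() - start)


# print(f"{measure_fps(my_model):.1f} FPS")  # my_model: whatever net you test
```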
-
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
1. Suggest a new feature by leaving a comment.
…
-
Hi. I have a couple of questions regarding how motion tokens are fed in during inference and training. I have an array of SMPL parameters (pose, beta, etc.).
- Do I have to convert it into a .ply file of a …
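
For context, one way to go from a raw SMPL parameter array to a .ply mesh is sketched below, assuming the smplx and trimesh packages and a local SMPL model file; whether this repo actually expects .ply input rather than the parameters themselves is exactly the open question above.

```python
# Hedged sketch: SMPL parameter array -> mesh -> .ply file.
# Assumes the smplx and trimesh packages and an SMPL model under "models/";
# the parameter arrays below are hypothetical zeros.
import numpy as np
import torch
import smplx
import trimesh

pose = np.zeros((1, 72), dtype=np.float32)   # 72-dim axis-angle pose
betas = np.zeros((1, 10), dtype=np.float32)  # 10-dim shape coefficients

body = smplx.create("models/", model_type="smpl", gender="neutral")
output = body(
    betas=torch.from_numpy(betas),
    body_pose=torch.from_numpy(pose[:, 3:]),      # 23 body joints * 3
    global_orient=torch.from_numpy(pose[:, :3]),  # root orientation
)
vertices = output.vertices.detach().numpy()[0]    # (6890, 3)

trimesh.Trimesh(vertices, body.faces, process=False).export("frame_0000.ply")
```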
-
Hi,
I'm using the 3D lifter simplebaseline3D. When I use the 2D GT as input, I get good results on h36m validation.
![image](https://user-images.githubusercontent.com/58964165/192491025-aab89…
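
For readers unfamiliar with this class of model, a minimal Martinez-style lifter is sketched below; the layer sizes follow the original "simple baseline" paper, but this is not necessarily the repo's exact implementation. Feeding GT 2D versus detected 2D only changes the input tensor handed to it.

```python
# Minimal sketch of a Martinez-style 2D-to-3D lifting network (simplified).
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    def __init__(self, dim=1024, p=0.5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p),
        )

    def forward(self, x):
        return x + self.block(x)


class SimpleBaseline3D(nn.Module):
    def __init__(self, num_joints=17, dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, dim),   # flattened 2D keypoints in
            ResBlock(dim), ResBlock(dim),
            nn.Linear(dim, num_joints * 3),   # flattened 3D keypoints out
        )

    def forward(self, kp2d):                  # kp2d: (B, num_joints, 2)
        b = kp2d.shape[0]
        return self.net(kp2d.reshape(b, -1)).reshape(b, -1, 3)


lifter = SimpleBaseline3D()
print(lifter(torch.randn(4, 17, 2)).shape)    # torch.Size([4, 17, 3])
```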
-
So, when I run the fit code in a batch over all hico-det test images, it fails on the `HICO_test2015_00001000.jpg` image; however, if I run it on that image separately, it doesn't. I am very confused about what is…
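
One hedged way to narrow this down is to run the fit one image at a time with a fresh call per image and record which ones fail; `run_fit` and the directory layout below are hypothetical stand-ins for the repo's actual per-image entry point.

```python
# Hedged debugging sketch: isolate failing images by fitting one at a time.
import glob
import traceback


def run_fit(image_path):
    """Placeholder: replace with the repo's actual single-image fit call."""
    pass


failures = []
for path in sorted(glob.glob("hico_det/images/test2015/*.jpg")):
    try:
        run_fit(path)
    except Exception:
        failures.append(path)        # record the offender but keep going
        traceback.print_exc()

print(f"{len(failures)} images failed:", failures[:10])
```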
-
hi,
the link to download the pre-processed data human36m.zip does not exist, could you please upload it again?
-
Hello! I read the NeuralAnnot paper and have some questions. Can you help me?
1. NeuralAnnot takes a single-view image as input and outputs a set of MANO parameters. Thus for a single hand pose in …
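
For reference, a minimal sketch of what "a set of MANO parameters" amounts to, using the smplx package as a stand-in (NeuralAnnot's own code may package these differently; the model path below is a placeholder):

```python
# Hedged sketch of MANO parameters for one hand, via the smplx package.
import torch
import smplx

mano = smplx.create("models/", model_type="mano", is_rhand=True, use_pca=False)

betas = torch.zeros(1, 10)          # hand shape coefficients
global_orient = torch.zeros(1, 3)   # root rotation, axis-angle
hand_pose = torch.zeros(1, 45)      # 15 finger joints * 3 axis-angle values

out = mano(betas=betas, global_orient=global_orient, hand_pose=hand_pose)
print(out.vertices.shape)           # torch.Size([1, 778, 3]) -- one hand mesh
```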
-
Hi, it is great work, and here I used MediaPipe instead of YOLOv3+HRNet to test the model on in-the-wild video; the results are pretty good, with quicker processing of each frame, from 13…
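
For anyone trying the same swap, the MediaPipe side might look roughly like the sketch below; the video path is a placeholder, and mapping the 33 MediaPipe landmarks onto the skeleton this model expects is repo-specific and not shown.

```python
# Minimal sketch of per-frame 2D keypoint extraction with MediaPipe Pose.
import cv2
import mediapipe as mp
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=False, model_complexity=1)
cap = cv2.VideoCapture("in_the_wild.mp4")   # placeholder video path

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        continue
    h, w = frame.shape[:2]
    # 33 landmarks, normalized to [0, 1]; scale back to pixel coordinates.
    kp2d = np.array(
        [[lm.x * w, lm.y * h] for lm in results.pose_landmarks.landmark]
    )
    # ... remap joints and feed kp2d to the 3D model in place of HRNet output

cap.release()
pose.close()
```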