-
Hello, thanks for your code. Can you provide the code for this challenge or method in detail?
Best
Weiguo Zhou
-
I appreciate your great work.
For my own human pose dataset, in order to train my own model:
1) How do I generate the center and bounding box for each depth image?
2) How to generate the me…
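(Since the center/bounding-box question above comes up repeatedly: a common recipe in depth-based pose pipelines, not necessarily this repo's actual code, is to take the center of mass of the foreground depth pixels and project a fixed-size metric cube around it into the image. The function name, focal lengths, and cube size below are placeholder assumptions.)

```python
import numpy as np

def center_and_bbox(depth, cube_mm=250.0, fx=588.0, fy=587.0):
    """Center of mass of the foreground depth pixels, plus a bounding
    box obtained by projecting a cube of half-side `cube_mm` (in mm)
    at that depth into the image. fx/fy are placeholder focal lengths;
    assumes depth is in millimetres with background pixels at 0."""
    v, u = np.nonzero(depth > 0)                 # foreground rows/cols
    z = depth[v, u].astype(np.float64)
    cu, cv, cz = u.mean(), v.mean(), z.mean()    # center of mass
    half_u = cube_mm * fx / cz                   # cube half-side in pixels
    half_v = cube_mm * fy / cz
    bbox = (cu - half_u, cv - half_v, cu + half_u, cv + half_v)
    return (cu, cv, cz), bbox
```

In practice a small foreground/segmentation step (thresholding, or a detector as in some papers) is needed first so that only hand/body pixels are nonzero.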
-
Thank you for the great project.
Here's an embarrassing question: I don't have a DepthA camera, but I do have an Azure Kinect,
so I want to change the depth-image interface to the Kinect. Can you give me…
-
### Issue Summary
There are a lot of people asking questions about the depth info of the nodes. I am sorry for adding another one on top of the pile.
I use OpenPose with a RealSense SR300 camera. R…
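(For the recurring "depth of the nodes" question: the usual approach is to look up the depth image under each 2D keypoint and back-project with the camera intrinsics. A minimal sketch; the intrinsics and depth scale below are illustrative placeholders, and it assumes the depth frame is aligned to the image the keypoints were detected on — real RealSense values come from the SDK.)

```python
import numpy as np

# Placeholder pinhole intrinsics -- replace with your camera's values.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0
DEPTH_SCALE = 0.001  # raw depth units -> metres (typical RealSense scale)

def keypoints_to_3d(keypoints_2d, depth_raw):
    """Back-project 2D keypoints (u, v) to 3D (x, y, z) in metres
    using the depth value under each keypoint."""
    pts = []
    for (u, v) in keypoints_2d:
        z = depth_raw[int(round(v)), int(round(u))] * DEPTH_SCALE
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        pts.append((x, y, z))
    return pts
```

A robustness note: keypoints often land on depth holes (zeros), so taking a median over a small patch around (u, v) instead of a single pixel is a common refinement.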
-
Dear Author:
In your AAAI 2020 paper, you said "we first train a small separated 2D CNN to attain hand center and extract hand regions from depth images".
During our test, there are s…
-
Hello, I have a question about the shuffle & docom arguments.
In the code, shuffle and docom are both set to False.
I want to know the reason for setting these two arguments.
Also, is the docom option related to …
-
Hi, thanks again for sharing your code. I was going through your codebase and got confused by the following. Could you share your view on these?
1. In the NYU train_and_test.py script, what is the purp…
-
On the V2V GitHub site "https://github.com/mks0601/V2V-PoseNet_RELEASE", I find that the functions pixel2world and world2pixel are implemented differently for different datasets.
In dataset MSRA: world2pi…
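(On why these conversions differ per dataset: the structural difference is typically only the image-vs-world y-axis convention — some datasets define world y pointing up while image rows grow downward, so the y term changes sign. A hedged sketch of the two pinhole variants, with placeholder intrinsics; I am not reproducing the repo's exact code.)

```python
def world2pixel_y_down(x, y, z, fx, fy, cx, cy):
    # Variant where world y increases in the same direction as image rows.
    return x * fx / z + cx, y * fy / z + cy

def world2pixel_y_up(x, y, z, fx, fy, cx, cy):
    # Variant where world y points up, so the row coordinate is flipped.
    return x * fx / z + cx, cy - y * fy / z

def pixel2world_y_up(u, v, z, fx, fy, cx, cy):
    # Inverse of world2pixel_y_up.
    return (u - cx) * z / fx, (cy - v) * z / fy, z
```

Either convention is self-consistent as long as pixel2world and world2pixel for a given dataset invert each other, which is easy to check with a round-trip test.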
-
Hi @zhangboshen,
I'm trying to train A2J model on ITOP dataset. Since the bounding box file for the training set is not provided, could you please tell me how to obtain bounding boxes? Many thanks!
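(One common workaround when a bounding-box file is not provided — not necessarily what the authors did — is to derive boxes from the ground-truth 2D joints with a pixel margin. A minimal sketch, assuming the joints are available as an (N, 2) array of pixel coordinates; the function name and margin are illustrative.)

```python
import numpy as np

def bbox_from_joints(joints_uv, margin=20.0):
    """Axis-aligned box around 2D joints, padded by `margin` pixels.
    joints_uv: (N, 2) array of (u, v) pixel coordinates."""
    u_min, v_min = joints_uv.min(axis=0) - margin
    u_max, v_max = joints_uv.max(axis=0) + margin
    return u_min, v_min, u_max, v_max
```

The result should then be clipped to the image bounds, and the margin tuned so the crop statistics match whatever the pretrained model expects.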
-
Hi, thanks for the implementation of V2V-PoseNet in PyTorch. It's really helpful in understanding the authors' work. I'm thinking of using your code in a project and wonder if it's open-source. If it …