dumyy / handpose

CrossInfoNet of CVPR 2019 for hand pose estimation

Code_workflow #12

Closed ibrahimrazu closed 4 years ago

ibrahimrazu commented 5 years ago

Hi, thanks again for sharing your code. I was going through your codebase and got confused by the following points. Could you share your thoughts on them?

  1. In the NYU train_and_test.py script, what is the purpose of declaring cubes (250, 250, 250)? Do they correspond to the depth map?
  2. In the same script, where do you compute the initial feature maps T? Is it Basenet2 in the Basemodel module? I ask because I'm planning to use that block on a 48 by 48 input alongside the 96 by 96 input and concatenate the T feature maps.
MedlarTea commented 5 years ago

Hi, Md Ibrahim Khalil! It looks like you have successfully run the code. I'm a novice, and I'm confused about the code at line 999 in /directory/data/importers.py. The train_NYU.txt file does not exist in NYUdataset/train/. How can I get it? Thanks a lot!

ibrahimrazu commented 5 years ago

Hi Medlar, they use the pre-computed hand centers from V2V-PoseNet. You just need to copy those txt files into the train and test subfolders. A detailed description is given in cache/NYU/readme.md
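For anyone else setting this up, here is a minimal sketch of how such pre-computed center files are typically read. This is not the repo's code: the one-triple-per-line format and the NaN handling for invalid lines are assumptions about the V2V-PoseNet center files.

```python
import numpy as np

def load_centers(path):
    """Load per-frame 3D hand centers from a txt file.

    Assumes one 'x y z' triple per line; lines that do not parse as
    three numbers (e.g. invalid/missing frames) become NaN rows.
    """
    centers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                centers.append([float(p) for p in parts])
            else:
                centers.append([np.nan] * 3)  # invalid/missing center
    return np.asarray(centers, dtype=np.float32)
```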

MedlarTea commented 5 years ago

Oh! Thanks Thanks Thanks!!!

dumyy commented 5 years ago
  1. cube is a bounding box used to crop the hand area in 3D space. We use the box (250mm, 250mm, 250mm) when testing.
  2. Initial feature T: hand3map = global_fms[-3], feature size (12, 12), at line 62 of basemodel.
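To illustrate point 1, here is a hedged sketch (not the repo's exact code) of cropping a hand patch from a depth map using a (250, 250, 250) mm cube around a given center. The center is assumed to be in (u, v, depth-mm) image coordinates, and the focal lengths fx, fy default to commonly cited NYU Kinect values, which is an assumption here.

```python
import numpy as np

def crop_hand(depth, center, cube=(250.0, 250.0, 250.0), fx=588.03, fy=587.07):
    """Crop and depth-normalize a hand patch from a depth map (mm).

    center: (u, v, z) with u, v in pixels and z in mm.
    Projects the cube's metric half-extent to a pixel radius at depth z,
    crops that window, clamps depth to the cube, and maps it to [-1, 1].
    """
    cu, cv, cz = center
    ru = int(round(cube[0] / 2 * fx / cz))  # half-width in pixels at depth cz
    rv = int(round(cube[1] / 2 * fy / cz))  # half-height in pixels at depth cz
    u0, u1 = max(0, int(cu) - ru), min(depth.shape[1], int(cu) + ru)
    v0, v1 = max(0, int(cv) - rv), min(depth.shape[0], int(cv) + rv)
    patch = depth[v0:v1, u0:u1].astype(np.float32).copy()
    near, far = cz - cube[2] / 2, cz + cube[2] / 2
    patch[patch == 0] = far            # treat missing depth as background
    patch = np.clip(patch, near, far)  # clamp along z to the cube
    return (patch - cz) / (cube[2] / 2)  # normalize to [-1, 1]
```

The patch would then be resized to the network input size (e.g. 96 by 96) before being fed to the base network.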
ibrahimrazu commented 5 years ago

Thanks a lot 🙂