-
Hi,
I wanted to apply PoseNet to my own dataset. However, I found that the results from demo.py and test.py are very different.
In demo.py, each time we pass a single processed image that o…
-
![image](https://user-images.githubusercontent.com/49524647/123750955-bde7d980-d8e9-11eb-95c2-00b69fc41a3d.png)
Why do I need to load the 2D MPII dataset when training on the 3D Human3.6M dataset?
-
Hi,
I noticed something peculiar in the augmentation part, and I'd be grateful if you could clarify it for me. Basically, in your dataset class (for example in Human36M/dataset.py), when yo…
-
Hello. Will the 3dpw and hp3d datasets be supported for 3D human pose training? I only saw the Human3.6M dataset.
-
First of all, thank you for releasing the SMPL parameters of the Human3.6M dataset.
If it isn't too much trouble, would you please share the 2D keypoint information that was used for the fitting procedu…
-
Hello. I saw that 3D pose and mesh training require specifying the argument "--no-validation". I wonder whether the evaluation code for the 3D tasks will be released? For example, Protocol 1-3 for h…
-
Thanks for your work.
I can't find the links to download bbox_human36m_output.json and bbox_mupots_output.json in the README.
Could you give me some suggestions?
-
Hi. I saw human36m_batch_128.pkl,
but I don't understand some points.
1. When I read the pkl file, the ['images'] shape is (128, 4, 384, 384, 3). What does the 4 mean? The number of cameras?
2. ['d…
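For reference, a quick way to answer this kind of question yourself is to load the pickle and print each key's shape. The snippet below is a sketch against a toy stand-in (the keys and downsized shape are assumptions, not the real file's contents); point the same loop at human36m_batch_128.pkl to see its actual layout.

```python
# Sketch: inspecting the arrays stored in a batch .pkl file.
# The toy dict below is a stand-in for human36m_batch_128.pkl, using a
# downsized version of the reported layout (batch, views, H, W, channels).
import io
import pickle

import numpy as np

toy = {"images": np.zeros((2, 4, 8, 8, 3), dtype=np.uint8)}
buf = io.BytesIO()
pickle.dump(toy, buf)
buf.seek(0)

data = pickle.load(buf)
for key, value in data.items():
    shape = getattr(value, "shape", None)
    print(key, shape if shape is not None else type(value))
```

For the real file, ['images'] is reported as (128, 4, 384, 384, 3): 128 samples, and axis 1 most plausibly indexes the four Human3.6M camera views, but the repo authors would need to confirm that.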
-
Thank you for your excellent work.
'Sample_20_test_Human36M_smpl' is required by the project, but only 'Sample_64_test_Human36M_protocol_2' is found at the given link. Also, the given file doesn't includ…
-
Hi! Thank you for the amazing contribution.
I wonder why you use islice() during training.
https://github.com/karfly/learnable-triangulation-pytorch/blob/9d1a26ea893a513bdff55f30ecbfd2ca8217bf5d/tr…
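For context, a common reason to wrap a dataloader in itertools.islice() is to cap the number of iterations per epoch without rebuilding the loader. Whether that is the motivation in this repository is a guess; the sketch below just demonstrates the pattern, and `fake_dataloader` and `n_iters_per_epoch` are hypothetical names.

```python
# Sketch: limiting a (potentially endless) dataloader to a fixed number
# of iterations per epoch with itertools.islice().
from itertools import islice

def fake_dataloader():
    """Stand-in for a torch DataLoader: yields batch indices forever."""
    i = 0
    while True:
        yield i
        i += 1

n_iters_per_epoch = 5
for epoch in range(2):
    for batch in islice(fake_dataloader(), n_iters_per_epoch):
        pass  # the training step would go here
    print(f"epoch {epoch}: ran {n_iters_per_epoch} iterations")
```

islice(iterable, stop) simply stops drawing after `stop` items, so each epoch processes exactly n_iters_per_epoch batches even though the underlying iterator never ends.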