-
Hi,
thanks for open-sourcing such great research. I know the PROX recording data contains depth and mask GT, as well as camera parameters, but other in-the-wild videos (such as my selfie video) don't have…
-
* Name of dataset: Human 3.6M
* URL of dataset: http://vision.imar.ro/human3.6m/description.php
* License of dataset: GRANT OF LICENSE FREE OF CHARGE FOR ACADEMIC USE ONLY
* Short description of d…
-
Hello! Thank you for your excellent work. I would like to ask how to run your method on data I collected myself of human performers and extract the human surfaces. Specifically, I want to know how to prepare …
-
Hello, thank you very much for your work and published code.
I noticed at the code snippet below: https://github.com/MoyGcc/vid2avatar/blob/a1ab86a1cafc5a6e6be61bd8ef16c9c19711a415/code/lib/model/v…
-
Hi. I really like your work. I am currently trying to use it for clinical gait analysis. I want to get the 3D coordinates (with respect to the camera) of key joints of people.
I ran your demo co…
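For context, camera-frame joint positions are usually obtained from world-frame predictions via the extrinsics, X_cam = R·X_world + t. A minimal NumPy sketch (the rotation, translation, and joint values are made-up placeholders, not outputs of this repo):

```python
import numpy as np

def joints_world_to_camera(joints_world, R, t):
    """Transform (N, 3) world-space joints into camera coordinates: X_cam = R @ X + t."""
    return joints_world @ R.T + t

# Toy extrinsics: identity rotation, camera origin 2 m behind the world origin along z.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
joints = np.array([[0.0, 0.0, 0.0],    # e.g. pelvis at the world origin
                   [0.0, 0.5, 0.0]])   # a second hypothetical joint
print(joints_world_to_camera(joints, R, t))  # first joint maps to (0, 0, 2)
```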
-
Inference on images in the wild using SemGCN has been partially covered in this [thread](https://github.com/garyzhao/SemGCN/issues/2) and others, but only the overall process has been made clear. I.e.…
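For readers in a similar situation, the overall in-the-wild process usually amounts to: run an off-the-shelf 2D detector, normalize the keypoints to the range the lifting network was trained on, then lift to 3D. A minimal sketch (the normalization scheme and the stub lifter are illustrative assumptions, not SemGCN's actual preprocessing or model):

```python
import numpy as np

def normalize_2d(kps, img_w, img_h):
    # Scale pixel coordinates so x spans [-1, 1] and y keeps the aspect ratio;
    # 2D-to-3D lifting models typically expect inputs in roughly this range.
    return kps / img_w * 2.0 - np.array([1.0, img_h / img_w])

def lift_to_3d(kps_2d):
    # Stand-in for the trained lifting network (hypothetical): appends a zero
    # depth column so the end-to-end pipeline shape is visible.
    return np.concatenate([kps_2d, np.zeros((len(kps_2d), 1))], axis=1)

kps = np.array([[640.0, 360.0], [700.0, 400.0]])  # 2D detector output in pixels
pose3d = lift_to_3d(normalize_2d(kps, img_w=1280, img_h=720))
print(pose3d.shape)  # (2, 3)
```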
-
# Weakly-Supervised 3D Human Pose Learning via Multi-view Images in the Wild | An AI Graduate Student's Study Blog
Umar Iqbal, Pavlo Molchanov, Jan Kautz (NVIDIA). Proceedings of the IEEE/CVF International Conferenc…
-
Hello! You mention in your paper that you pre-train a network for 2D pose estimation, but this part of the network does not seem to be included in your code, and the 2D estimation result was directl…
-
Thank you very much for your excellent code, but I would like to ask you a few questions:
1. What is the purpose of introducing the offset?
2. In the code file, you have the center point, bounding box, mean…
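On question 1, offsets in heatmap-based methods commonly compensate for downsampling: the heatmap peak is an integer coordinate at a reduced resolution, and a learned fractional offset restores sub-pixel precision when mapping back to the image. A minimal sketch of that recovery step (the stride and values are illustrative assumptions, not this repo's actual configuration):

```python
import numpy as np

def recover_point(peak_xy, offset_xy, stride=4):
    # peak_xy: integer coordinates of the heatmap peak;
    # offset_xy: predicted fractional correction in heatmap units;
    # stride: downsampling factor between heatmap and input image.
    return (np.asarray(peak_xy, dtype=float) + np.asarray(offset_xy)) * stride

print(recover_point((30, 20), (0.25, 0.5)))  # -> [121.  82.]
```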