-
Hi, you do such great work!
I found two links you released:
[SMPL from SMPLify-x](https://drive.google.com/drive/folders/12fCumEgs9PXT-dAaOGq0EDpl9dGKKorF)
[NeuralAnnot](https://drive.google.com/d…
-
**Is your feature request related to a problem? Please describe.**
Regarding the RegDA part and the use of the Human3.6M dataset: the link in #142 is no longer valid.
**Describe the solution you'd like**
Could you provide the preprocessing code? The Human3.6M dataset is rather complicated and contains multiple archives, and I am not sure about S1…
-
I reproduced your Table 1 with , but got the following numbers. Is something wrong?
PA-MPJPE: 61.4 MPJPE: 105.8 PVE: 123.0 Accel_error: 20.3
while your final result is:
PA-MPJPE: 53.1 MPJPE: 85.7…
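Not sure where the gap comes from, but one common source of mismatch is how PA-MPJPE is computed. For reference, it is usually a Procrustes (similarity) alignment followed by MPJPE; below is a minimal NumPy sketch of that metric (the function name and shapes are my assumptions, not taken from the repo):

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: find the similarity transform
    (scale, rotation, translation) that best maps `pred` onto `gt`,
    then average the per-joint Euclidean distance.
    pred, gt: (J, 3) joint arrays in the same unit (e.g. mm)."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g                  # center both joint sets
    M = p.T @ g                                    # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(M)
    V = Vt.T
    d = np.sign(np.linalg.det(V @ U.T))            # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = V @ D @ U.T                                # optimal rotation (Kabsch/Umeyama)
    scale = np.trace(np.diag(S) @ D) / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

The metric is typically evaluated per frame and averaged over the test set, with joints in millimeters; differences in joint subsets or alignment details between eval scripts can easily shift the number by several mm.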
-
Hi, when I run the cmuwalker example from the IPython notebook, I get this error on my side:
```
dm_control viewer intercepted an environment error.
Original message: operands could not be broadcast together with shap…
-
Hi,
when I reproduce repr_table6_h36m, I face the following problem. My machine has 55 GB of memory, and I disabled vertex prediction because it uses too much memory.
```
InstaVariety number of dataset objects 13043…
-
@matteorr where can I find the data?
# Mturk Data:
We list the links to download the crowd collected relative depth annotations and the merged version which we release as the Relative Depth LSP Dat…
-
Hi. I am running your code, aiming to lift the 2D pose to 3D. However, in the preprocessing step, I found that camera parameters are needed.
Can I just use the models to lift the 2d pose to the …
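If the concern is only the intrinsics: a common preprocessing step for 2D-to-3D lifting is to map pixel keypoints into normalized camera coordinates, which removes the camera dependence from the inputs. A minimal sketch under that assumption (function and parameter names are mine, not from this repo):

```python
import numpy as np

def normalize_keypoints(kps_px, fx, fy, cx, cy):
    """Map 2D keypoints (N, 2) from pixel coordinates to normalized
    camera coordinates by undoing the pinhole intrinsics:
    x_n = (u - cx) / fx,  y_n = (v - cy) / fy."""
    kps = np.asarray(kps_px, dtype=float)
    return (kps - np.array([cx, cy])) / np.array([fx, fy])
```

Whether a pretrained lifter works without the true camera parameters depends on how its training data was normalized; feeding it keypoints in a different coordinate convention usually degrades the 3D output.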
-
I tried the following code, and the visualization results show that the global rotation of the ground-truth MoSh data doesn't correspond to the image. Did I get something wrong, or is the data incorrect…
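In case it is the usual world-vs-camera frame mismatch: MoSh/SMPL ground truth is often stored with the global orientation in world coordinates, so it has to be composed with the camera extrinsic rotation before it lines up with the image. A plain-NumPy sketch of that composition (assuming an axis-angle global orientation; names are mine):

```python
import numpy as np

def rodrigues(aa):
    """Axis-angle vector (3,) -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    k = aa / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def global_orient_world_to_cam(global_orient_aa, cam_R):
    """Compose the camera extrinsic rotation with a world-frame SMPL
    global orientation so the rendered body matches the camera view."""
    return cam_R @ rodrigues(np.asarray(global_orient_aa, dtype=float))
```

If the rendered body is consistently off by the same rotation for every frame of a camera, a missing `cam_R` composition like this is the usual culprit rather than bad data.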
-
Hi. I found that the 3D datasets contain the extrinsic parameters of each camera. But where is the origin of the world coordinate system for the human36m, 3dpw, and hp3d datasets, respectively?
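For what it's worth, the extrinsics alone already relate the two frames regardless of where each dataset places its world origin: under the common convention X_cam = R X_world + t, the camera's own position in world coordinates is C = -Rᵀt. A small sketch of both directions (names are my own, not from any of the datasets' toolkits):

```python
import numpy as np

def world_to_camera(X_world, R, t):
    """Transform row-stacked world points (N, 3) into camera
    coordinates: X_cam = R @ X_world + t."""
    return np.asarray(X_world) @ R.T + t

def camera_center_in_world(R, t):
    """Camera position in world coordinates: C = -R^T t
    (the point that maps to the camera-frame origin)."""
    return -R.T @ t
```

So as long as you work in a single camera's frame, the choice of world origin cancels out; it only matters when you compare quantities across cameras or across datasets.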
-
train.log is as follows:
```
InstaVariety number of dataset objects 2086896
3DPW Dataset overlap ratio: 0.9375
Loaded 3dpw dataset from data/preprocessed_data/3dpw_train_occ_db.pt
is_train: True
…