-
## Summary
1. Is it possible to estimate a point light's position?
2. Is the approach I described valid?
## Description
Hi, I'd like to ask a question about estimating a point light's position…
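One common formulation (an assumption here, not stated in the question): if the surface geometry is known and roughly Lambertian, the light position can be fit by nonlinear least squares against an inverse-square-with-cosine shading model. A minimal NumPy/SciPy sketch on synthetic data from a flat plane:

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch: recover a point light's 3D position from intensities
# observed on a known Lambertian plane with normals (0, 0, 1) and unit
# albedo. The shading model and synthetic data are illustrative
# assumptions, not part of the original question.

def render(light, pts):
    d = light - pts                 # vectors from surface points to light
    r = np.linalg.norm(d, axis=1)   # distances to the light
    return d[:, 2] / r**3           # (n . d) / |d|^3 with n = (0, 0, 1)

def estimate_light(pts, intensities, x0=(0.0, 0.0, 1.0)):
    # Fit the light position that best reproduces the observed intensities.
    res = least_squares(lambda p: render(np.asarray(p), pts) - intensities, x0)
    return res.x

# Synthetic check: 5x5 grid of samples on the plane z = 0.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
true_light = np.array([0.5, -0.3, 2.0])
est = estimate_light(pts, render(true_light, pts))
```

With noiseless data and enough surface samples the fit recovers the position closely; real images would need albedo estimation and a noise model on top of this.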
-
If I want to train the model on another dataset, what data should I provide? Both the original-pose BVH and the target-pose BVH? If so, should they be paired? I.e., if I want to transform poses from A to …
-
Hello,
So I ran MocapNET2CSV (on JSON files exported by OpenPose). It worked and I got the out.bvh file, but the output animation is not completely right compared to the OpenPose preview outp…
-
If we use our own video dataset, or directly use OpenPose to extract the 2D joint positions and generate JSON files, can we replace the original JSON file and directly perform style transfer? H…
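For context on what such a replacement involves: OpenPose writes one `*_keypoints.json` per frame, with each detected person's joints stored as a flat `[x, y, confidence, ...]` list under `"pose_keypoints_2d"`. A minimal sketch of reading those 2D joints (the sample data below is synthetic, just to demonstrate the format):

```python
import json
import tempfile

# Hedged sketch: parse the 2D joints of one person from a single
# OpenPose *_keypoints.json file. Any downstream pipeline would then
# need the joints remapped to its own skeleton convention.

def load_pose_2d(path, person=0):
    with open(path) as f:
        data = json.load(f)
    people = data.get("people", [])
    if len(people) <= person:
        return []  # no detection in this frame
    flat = people[person]["pose_keypoints_2d"]  # [x0, y0, c0, x1, y1, c1, ...]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Synthetic two-joint example in the OpenPose JSON layout.
sample = {"version": 1.3,
          "people": [{"pose_keypoints_2d": [100.0, 200.0, 0.9,
                                            110.0, 250.0, 0.8]}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

joints = load_pose_2d(path)
```

Joints with confidence 0.0 are undetected and usually need filtering or interpolation before retargeting.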
-
When attempting to run convertOpenPoseJSONToCSV on a copy of the OpenPose output folder containing 273 `000000000xxx_keypoints.json` files, the following is output:
```
tm@tm-VirtualBox:~/…
```
-
Hello Carlos, thanks again for your patience,
this is what happens when I import the data:
![1](https://user-images.githubusercontent.com/47851332/113481235-a343c700-9498-11eb-97dc-1d4967ac425f.PNG)…
-
I tried to train the character using the hyperparameters given by @ManifoldFR in #3076.
However, after 60 million steps the character averages a reward of ~300-350, and when I test it the character w…
-
Hi @Shimingyi, I'm facing a problem testing this wonderful repository.
**1: By using h36m_gt_t.pth**
When I run `evaluate.py`, the BVH files are saved but there is nothing in them. There …
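A quick way to tell a truly empty export from one that only lacks a skeleton preview is to count the frames declared in the file's MOTION section. A hedged sketch (the sample BVH below is synthetic, just to demonstrate; point the function at the file `evaluate.py` wrote):

```python
import tempfile

# Hedged sketch: sanity-check a BVH export by reading the "Frames:" count
# from its MOTION section. A header-only or empty file reports 0 frames.

def bvh_frame_count(path):
    with open(path) as f:
        for line in f:
            if line.strip().startswith("Frames:"):
                return int(line.split(":", 1)[1])
    return 0  # no MOTION/Frames line found -> effectively empty output

# Minimal synthetic BVH with one joint and two motion frames.
sample = (
    "HIERARCHY\nROOT Hips\n{\n  OFFSET 0 0 0\n"
    "  CHANNELS 3 Xposition Yposition Zposition\n}\n"
    "MOTION\nFrames: 2\nFrame Time: 0.033333\n0 0 0\n0 1 0\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".bvh", delete=False) as f:
    f.write(sample)
    path = f.name
```

If the count is 0 the problem is in the export step; if it is nonzero, the data is there and the issue is more likely in how the viewer loads the file.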
-
If I run the following command,
`./MocapNET2LiveWebcamDemo --novisualization --from shuffle.webm --ik 0.01 15 40`
it gives me the following error:
Visualization disabled
`Incorrect number of arguments,…
-
Hello! I've cloned the repo, downloaded the pretrained model and placed it in a new folder, 'checkpoints', and placed the training data in the existing folder 'data', but when running the f…