-
Inspiration: https://www.youtube.com/watch?v=X5Z7ZJ39zAA (of particular interest, the mocap reconstruction demonstration starting at around 2:41)
That video is from many years ago, I imagine the t…
-
Hi, I am trying to reproduce the experiment from paper "Learning human behaviors from motion capture by adversarial imitation". I used the code from https://github.com/huiwenzhang/merel-mocap-gail. Bu…
-
Hello,
I notice that none of the configurations in the args file include the tasks mentioned in the paper. How can I train a new policy for a task (how should I configure/write the arg fi…
-
Hi and thanks again for your amazing work.
Since the provided code covers only imitation, I am trying to implement the humanoid walking policy with directional input myself.
To do so, I a…
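For context, one common way to give a walking policy directional input is to append a commanded heading (as a unit vector) to the observation and to reward the component of the root's planar velocity along that heading. The sketch below is a hypothetical illustration of that idea, not the paper's or repository's actual implementation; all function names are assumptions.

```python
import math


def heading_to_unit_vector(theta):
    """Convert a target heading angle (radians) to a 2-D unit vector."""
    return (math.cos(theta), math.sin(theta))


def augment_observation(obs, target_heading):
    """Append the commanded direction to the raw observation vector."""
    dx, dy = heading_to_unit_vector(target_heading)
    return list(obs) + [dx, dy]


def directional_reward(root_velocity_xy, target_heading):
    """Reward the planar root-velocity component along the commanded heading."""
    dx, dy = heading_to_unit_vector(target_heading)
    vx, vy = root_velocity_xy
    return vx * dx + vy * dy
```

With this conditioning, a single policy can be trained while the target heading is resampled across episodes, so the direction becomes a controllable input at test time.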
-
Hi,
I am trying to train a PHC on my data, but I want to start from a pre-trained PHC rather than train a brand-new one. The following are the training steps I designed based on my understanding:
1. I…
-
Hi dear developers :)
I'm a new user of ROS...
Is there a tutorial for simulating NAO in Gazebo and controlling it using the nao_virtual package?
I built this package successfully, but I don't know ho…
-
Hi,
**1. Unable to replay a trajectory in task "StackCube_v1"**
I tried to replay the trajectory of PushCube-v1 but failed.
```
python -m mani_skill.trajectory.replay_trajectory \
--traj-…
```
-
Hi, I added some complex scene meshes to the simulated environment, and now GPU memory is insufficient.
Is there any way to reduce the memory PHC needs? For example, could I delete some unnecessary s…
-
Hi All,
I am trying to run PPO on a custom PyBullet environment (very similar to MinitaurBulletEnv-v0) and am receiving an error when obtaining the first samples:
```
FutureWarning: pickle su…
```
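For reference, PPO implementations generally only require the environment to expose the standard Gym-style `reset`/`step` sampling interface. A minimal stubbed skeleton is sketched below; the class name and shapes are hypothetical, and the PyBullet physics calls are replaced by comments so only the interface is shown.

```python
class CustomBulletEnvSketch:
    """Minimal Gym-style environment skeleton (hypothetical names).

    The PyBullet physics calls are stubbed out; only the sampling
    interface that PPO relies on is shown.
    """

    def __init__(self, max_steps=1000):
        self.max_steps = max_steps
        self._step_count = 0

    def reset(self):
        # A real env would call p.resetSimulation(), reload the robot, etc.
        self._step_count = 0
        return self._get_observation()

    def step(self, action):
        # A real env would apply motor commands, then p.stepSimulation().
        self._step_count += 1
        observation = self._get_observation()
        reward = 0.0  # placeholder; a real env computes e.g. forward progress
        done = self._step_count >= self.max_steps
        return observation, reward, done, {}

    def _get_observation(self):
        # Placeholder; a real env reads joint angles/velocities here.
        return [0.0] * 8
```

Checking that a bare loop over `reset()` and `step()` runs cleanly, outside the PPO code, is a quick way to separate environment bugs from sampler warnings like the one above.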
-
**Proceedings**
https://papers.nips.cc/book/advances-in-neural-information-processing-systems-30-2017
https://github.com/catpanda/NIPS_2017
**Paper Lists (679 papers)**
https://www.dropbox.com/s…