-
I'm trying to use Isaac Gym for multi-agent reinforcement learning (MARL). Are there any future plans for this?
If not, can you give any suggestions on how I should integrate MARL algorithms into Iss…
reso1 updated
2 years ago
-
Hi, thanks for the amazing work!
I am wondering how important value normalization is. When I disable value normalization in some tasks, especially ShadowHand, the PPO agent doesn't work…
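For context, PPO implementations that normalize values typically keep running statistics of the returns and scale the value targets by them. The class below is a minimal sketch of that idea (a Welford-style running mean/variance tracker); the class and method names are illustrative, not taken from any specific codebase:

```python
import numpy as np

class RunningMeanStd:
    """Tracks a running mean and variance for value/return normalization."""
    def __init__(self, epsilon=1e-4):
        self.mean, self.var, self.count = 0.0, 1.0, epsilon

    def update(self, x):
        """Fold a batch of returns into the running statistics."""
        batch_mean, batch_var, batch_count = np.mean(x), np.var(x), len(x)
        delta = batch_mean - self.mean
        total = self.count + batch_count
        new_mean = self.mean + delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        m2 = m_a + m_b + delta ** 2 * self.count * batch_count / total
        self.mean, self.var, self.count = new_mean, m2 / total, total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

    def denormalize(self, x):
        return x * np.sqrt(self.var + 1e-8) + self.mean
```

Value targets are passed through `normalize()` before the value loss, and predicted values through `denormalize()` when computing advantages. Disabling this matters most in tasks with large or fast-growing returns (ShadowHand is one), where unnormalized value targets can destabilize the critic.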
-
Robots learn to exploit the physics engine and move forward in unrealistic ways.
-
Hi,
I was trying to use the zoo to tune my PPO hyperparameters, but I can't get my vec env to work with it. Since my environment contains multiple agents, there are multiple rewards and actio…
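One common workaround for single-agent tooling is to flatten the multi-agent interface: concatenate the per-agent observations and actions and reduce the per-agent rewards to one scalar. The wrapper below is a hypothetical sketch of that (the wrapped env's `reset()`/`step()` contract and the team-reward sum are assumptions; dedicated MARL libraries handle this more carefully):

```python
import numpy as np

class MultiAgentFlattenWrapper:
    """Present an N-agent environment as a single-agent env by concatenating
    per-agent observations/actions and summing per-agent rewards.

    Assumes the wrapped env returns per-agent lists from reset()/step().
    """
    def __init__(self, env, num_agents, obs_dim, act_dim):
        self.env, self.num_agents = env, num_agents
        self.obs_dim, self.act_dim = obs_dim, act_dim

    def reset(self):
        per_agent_obs = self.env.reset()       # list of (obs_dim,) arrays
        return np.concatenate(per_agent_obs)   # (num_agents * obs_dim,)

    def step(self, flat_action):
        # Split the flat action back into one chunk per agent.
        actions = np.split(flat_action, self.num_agents)
        obs, rewards, dones, infos = self.env.step(actions)
        return (np.concatenate(obs),
                float(np.sum(rewards)),        # shared team reward
                all(dones),                    # episode ends when all agents end
                infos)
```

The main caveat: summing rewards assumes a cooperative task; competitive or mixed settings need per-agent credit assignment and won't fit this flattening cleanly.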
-
Hi @Toni-SM ,
![Screenshot from 2022-05-24 10-36-00](https://user-images.githubusercontent.com/53815515/169987903-b25e6a74-f7d3-4c5b-99cc-2b8bd0828b4c.png)
Above you can see the training result …
-
The resulting plot does not look good. Investigate.
-
Using the new Python bindings and URDF, how do I create a robot and a static environment, such as the ground and some walls? I can successfully load a robot using URDF.
```
...
```
…
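For reference, a hedged sketch of how a static environment is commonly built with the Isaac Gym preview bindings: `add_ground` supplies the floor, and a fixed-base box asset can serve as a wall. The sizes, poses, and the `sim`/`env` handles below are assumptions for illustration, not taken from the issue:

```python
from isaacgym import gymapi

gym = gymapi.acquire_gym()
# sim and env are assumed to exist already (via create_sim / create_env)

# Ground plane
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)  # z-up convention
gym.add_ground(sim, plane_params)

# A static wall: a box asset with a fixed base so physics never moves it
wall_options = gymapi.AssetOptions()
wall_options.fix_base_link = True
wall_asset = gym.create_box(sim, 4.0, 0.1, 1.0, wall_options)

wall_pose = gymapi.Transform()
wall_pose.p = gymapi.Vec3(0.0, 2.0, 0.5)
gym.create_actor(env, wall_asset, wall_pose, "wall", 0, 0)
```

The robot loaded from URDF is then added to the same `env` with another `create_actor` call, so the ground, walls, and robot all share one simulation.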
-
1. In train/config.py, it calls self.get_if_off_policy(), but the function is actually named if_off_policy().
2. In train/config.py, it calls self.agent_class.__name__, but the 'Arguments' object…
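Both reports point at the same pair of pitfalls: calling a method under the wrong name, and taking `__name__` on an attribute that may be unset. A minimal sketch of the corrected pattern (this `Arguments` stand-in and its off-policy heuristic are illustrative, not the actual train/config.py source):

```python
class Arguments:
    """Minimal stand-in for a config object holding an agent class."""
    def __init__(self, agent_class=None):
        self.agent_class = agent_class  # may be None, so guard before __name__

    def if_off_policy(self):
        # Issue 1: callers must use self.if_off_policy(),
        # not the non-existent self.get_if_off_policy().
        # Issue 2: guard agent_class before accessing __name__.
        name = self.agent_class.__name__ if self.agent_class else ""
        return "PPO" not in name  # sketch: treat non-PPO agents as off-policy
```

The guard on `agent_class` is what prevents the reported `AttributeError` on the 'Arguments' object when no agent class has been assigned yet.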
-
OS Version: Ubuntu 21.04
Nvidia Driver: 495
Graphics: GTX 1660 Ti
PyTorch: 1.10.1+cu102
Hi, I tried anymal_c_flat and it works fine on the GTX 1660 Ti using nvidia-driver-495.
When I try…