-
### Describe the bug
When a DirectRLEnv is truncated because a max_episode_length property is set in gym.register,
Isaac Lab crashes when truncation occurs. The root cause is a single "Tr…
-
**Is your feature request related to a problem? Please describe.**
First off, I would like to thank you for building and maintaining an amazing project! One feature I would be interested in adding/c…
-
Thanks for the great work and sharing it.
As a beginner, I have read the files and folders related to HighwayEnv, and I understood them to a large extent from the documentation, but I have some doub…
-
Hi,
We tried to work on the same scenarios that were provided by Dominick Buse in his GitHub repository for single agents. We would like to extend our work to multi-agents. Since you have already asked th…
-
Hi, I am the maintainer/creator of [Esquilax](https://github.com/zombie-einstein/esquilax), a JAX-based large-scale simulation and RL environment library.
As a first use case of the library, I've been working on…
-
WIP: Godot RL using C# and TorchSharp, exposed as a GDExtension
https://github.com/edbeeching/godot_rl_agents/issues/104#issuecomment-1637026695
https://github.com/edbeeching/godot_rl_agents/discussio…
-
I tried to run the RL training scripts for multiple tasks, such as Stabilize, Reach and Grasp, and Insert, by
`python3 main/rl/train.py task= sim_device=cuda: rl_device=cuda: graphics_device_id=`
Howe…
-
- [ ] Add related neighbours' actions to the observation space
- [ ] Create the template class at the RL multi-agent level, or update the MultiAgent class itself
- [ ] How-to files and documentation
**Cross …
-
### Search before asking
- [X] I searched the [issues](https://github.com/ray-project/ray/issues) and found no similar issues.
### Ray Component
RLlib
### What happened + What you expected to hap…
-
### Feature request
I am trying to train offline RL using a Decision Transformer and convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "seq…