-
### Describe the bug
I am running a custom environment in which I'd like to set a condition for ce…
-
## Describe the bug
I wasn't sure whether to open this issue on torchRL, agenthive, or the robohive repo. Apologies if it's in the wrong place.
I'm trying to train a PPO agent on the Franka Kitch…
-
This is a loose roadmap of our plans for major changes to Gymnasium:
December:
- [x] Experimental new wrappers
- [x] Experimental functional API
- [x] Python 3.11 support
February / March:
-…
-
### Describe the bug
The default values for many fields in `source/extensions/omni.isaac.orbit/omni/isaac/orbit/managers/scene_entity_cfg.py` were changed from `None` to `slice(None)` in commit d6…
-
First of all: sorry if this doesn't belong here; I'll post it on the stable-baselines3 GitHub if so.
Hello, I'm a beginner and I'm facing a problem where I can't load the saved DQN model. I trai…
-
Not sure this is the desired behavior: the parallel envs will all run with the last defined num_threads. I'm not sure what the best default is in this case, though.
## To Reproduce
```python
import…
-
It seems like the Humanoid environments do not allow the rgb_array rendering mode. Is it possible to render environment frames on a headless server?
-
```
❯ python atari_torch.py (ncps)
2024-04-27 05:40:53,646 WARNING compression…
```
-
### Proposal
To support more algorithmic work (multi-agent RL, multi-critic learning), it would be great to extend support for rewards and terminations beyond the Box range and bool, respectively…
-
Currently, the lambda multipliers of the constraints are not reset to their original values if the integration step fails. This is an issue because they are used as the initial guess for the constraint so…