-
### Describe the bug
The wandb sweep agent cuts into the Isaac Sim thread before `app.update()` is called, which causes the process to hang forever.
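A hedged workaround sketch, not the wandb or Isaac Sim API itself: keep the simulator's update pump on the main thread and run the sweep trial in a worker thread, instead of letting the agent own the main thread. `train_once` is a placeholder name; the comment marks where `app.update()` would go.

```python
import threading
import queue

results = queue.Queue()

def train_once():
    # Placeholder for the sweep trial body (the real code would call
    # wandb.init(), build the env, and run the training loop here).
    results.put("done")

worker = threading.Thread(target=train_once, daemon=True)
worker.start()
while worker.is_alive():
    # The main thread keeps servicing the simulator loop here,
    # i.e. this is where app.update() would be called each iteration.
    worker.join(timeout=0.01)

result = results.get_nowait()
print(result)
```

The point of the structure is only that the main thread never blocks inside the sweep agent, so the app loop can keep running.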
### Steps to reproduce
```python
# Copyright (c)…
-
Hi, would it be possible for robo-gym to be upgraded from gym to gymnasium? Gymnasium is the maintained successor to OpenAI Gym and is compatible with current RL training libraries ([rllib](https://gith…
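Not robo-gym code, just a sketch of the core API change such a migration involves: Gymnasium's `step()` returns a 5-tuple where the legacy `done` flag is split into `terminated` and `truncated`. A dependency-free shim makes the difference concrete (the `info["TimeLimit.truncated"]` key is the convention legacy gym's `TimeLimit` wrapper used):

```python
def old_to_new_step(result):
    """Convert a legacy gym 4-tuple step result (obs, reward, done, info)
    into the gymnasium 5-tuple (obs, reward, terminated, truncated, info).

    Legacy `done` carries no truncation signal, so we read the
    conventional info["TimeLimit.truncated"] flag when it is present.
    """
    obs, reward, done, info = result
    truncated = bool(info.get("TimeLimit.truncated", False))
    terminated = bool(done) and not truncated
    return obs, reward, terminated, truncated, info

# Episode ended by reaching a true terminal state:
goal = old_to_new_step(([0.0], 1.0, True, {}))
# Episode cut off by a time limit (truncated, not terminated):
cutoff = old_to_new_step(([0.0], 0.0, True, {"TimeLimit.truncated": True}))
```

`reset()` changes similarly, returning `(observation, info)` instead of a bare observation.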
-
Error:
```
$ python rl.py
Traceback (most recent call last):
  File "rl.py", line 22, in <module>
    from pybrain.rl.environments.mazes import Maze, MDPMazeTask
ImportError: No module named pybrain.rl.envir…
-
- [x] I have marked all applicable categories:
    + [ ] exception-raising bug
    + [ ] RL algorithm bug
    + [x] documentation request (i.e. "X is missing from the documentation.")
    + [ ] ne…
-
Hi there,
As mentioned [here](https://github.com/NVIDIA-Omniverse/Orbit/discussions/108#discussion-5511668), I'm trying to use torchrl with [NVIDIA Orbit](https://github.com/NVIDIA-Omniverse/Orbit)…
-
### 🐛 Bug
I am encountering an issue when trying to train my donkeycar simulator agent using the train.py script from rl-baselines3-zoo. While I can successfully import and call the environment using…
-
1. When I run the OVMM 'evaluate' script,
```
#!/bin/bash
export MAGNUM_LOG=quiet
export HABITAT_SIM_LOG=quiet
set -x
python -u -m habitat_baselines.run --config-name=ovmm/rl_discrete_skil…
-
Hello,
I am trying to run the training code main.py via Anaconda with Python 3.7.4, CARLA 0.9.9.4, and TensorFlow 2.3.1.
Could you please help me fix the error?
![Screenshot (31)](https://user-images.githubuser…
-
## 🐛 Bug
We get a file not found error when looking for the NetHack temporary directory after running our RL algorithm for > 1M steps. This error doesn't occur on all runs; only some runs get this …
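A hedged mitigation sketch, not the NLE codebase's own fix: if a long-running job's temporary directory can be removed underneath it (e.g. by a periodic `/tmp` cleaner), re-create it defensively before each use rather than assuming it still exists. All names below are placeholders; only standard-library calls are used.

```python
import os
import tempfile

def ensure_dir(path):
    """Create `path` (and any missing parents); a no-op if it already exists."""
    os.makedirs(path, exist_ok=True)
    return path

# Placeholder location; the real code would use whatever directory the
# NetHack wrapper was configured with at construction time.
workdir = os.path.join(tempfile.gettempdir(), "nle-workdir-example")
ensure_dir(workdir)  # cheap enough to call at every episode boundary
```

`exist_ok=True` makes the call idempotent, so it is safe to invoke on every rollout without checking first.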
-
tensorforce: 0.6.5
python: 3.10.4
```
[user 0.6.5]$ python3 ../quick.py
Traceback (most recent call last):
  File "/home/user/0.6.5/../quick.py", line 4, in <module>
    environment = Envir…