-
### to-do-list
- [x] Implementation of DDPG + HER only. https://github.com/ropiens/universal-option-framwork-pytorch/pull/3
- [x] Implementation of high-level policy and DIOL. https://github.com/rop…
-
I was trying out `this_works_1_18.ipynb`. When I specified a single stock instead of DOW30, it gave me `KeyError: true`:
![image](https://user-images.githubusercontent.com/4706946/110224843-7cc85580-…
-
**Important Note: we do not provide technical support or consulting** and do not answer personal questions by email.
Please post your question on the [RL Discord](https://discord.com/invite/xhfNqQv), [R…
-
### Question
I'm attempting to use RL-Zoo for hyperparameter tuning of a DDPG agent on a custom environment. Is there currently a guide or description of this process?
### Additional context
I…
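(Not an official guide, just a sketch of how tuning is usually set up in rl-baselines3-zoo, assuming the custom env is registered with gym. The env id `MyCustomEnv-v0` and all values below are placeholders, not from this issue.) Default hyperparameters for an algorithm live in `hyperparams/ddpg.yml`, keyed by env id:

```yaml
# Hypothetical entry in hyperparams/ddpg.yml for a registered custom env
# (env id and every value here are placeholders)
MyCustomEnv-v0:
  n_timesteps: !!float 1e5
  policy: 'MlpPolicy'
  learning_rate: !!float 1e-3
  buffer_size: 100000
  noise_type: 'normal'
  noise_std: 0.1
```

Optuna-based search is then launched with something like `python train.py --algo ddpg --env MyCustomEnv-v0 -optimize --n-trials 100 --sampler tpe --pruner median`; the flag names follow the zoo's README, so check them against the version you have installed.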
-
Refer to https://github.com/yamatokataoka/reinforcement-learning-replications/issues/59
-
I am using the zoo to train a custom env with DDPG + HER. Images (64, 64, 4) and positions (goal (3, 1)) are my input.
The following warning occurred:
```
Training and eval env are not of the same type !=
```
…
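For context, this warning is raised when the training and evaluation environments end up as different (wrapper) types, typically because the training env was wrapped (e.g. for image transposition) while the eval env was not. A minimal stdlib sketch of the underlying type check; the class names here are made up, standing in for SB3's VecEnv wrappers:

```python
class BaseEnv:
    """Stand-in for a plain (vectorized) environment."""

class ImageTransposeWrapper(BaseEnv):
    """Hypothetical wrapper, standing in for e.g. VecTransposeImage."""

train_env = ImageTransposeWrapper()
eval_env = BaseEnv()

# Mirrors the check behind the warning: the two envs compare as
# different types because only one of them was wrapped.
if type(train_env) is not type(eval_env):
    print(f"Training and eval env are not of the same type "
          f"{type(train_env)} != {type(eval_env)}")

# The usual fix is to apply the same wrappers to the eval env:
eval_env = ImageTransposeWrapper()
assert type(train_env) is type(eval_env)
```

In other words, the warning is usually resolved by constructing the eval env through the same wrapper chain as the training env.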
-
### Question
I am using the video recorder from the stable-baselines3 tutorial on Colab with a custom env.
### Additional context
```python
import os

# Start a virtual framebuffer so the env can render without a physical display
os.system("Xvfb :1 -screen 0 1024x768x24 &")
…
-
Hello, I compared the results you report in the README with the Spinning Up benchmark: your D4PG and D3PG are much worse than Spinning Up on the HalfCheetah-v2 env. Maybe you should improve your implement…
-
### Question
I am using DDPG+HER with images (64, 64, 4) and positions (3, 1) as input, using rl-baselines3-zoo. I tried reducing the buffer size, but the GPU is still not used. Also, is it possible to a…
-
Let's start the `project-sandwich-man`
_kick-off: 21/08/04_