-
OS: Ubuntu 18.04
Python: 3.6.9
atari-py: 0.3.0
I tried decompressing the ROM archive once (producing two zip files) and twice (producing two folders). Each time I tried ```python -m atari_py.import_r…
-
In gym's FetchPush-v1, `env.observation_space['observation']` reports
`Box(-inf, inf, (25,), float32)`.
However, when I print the length of 'observation' using enjoy.py,
it prints 26.
`pyth…
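One way to pin down such a mismatch is to compare the declared shape against an actual sample. This is a minimal sketch with stand-in values (no gym or MuJoCo required); the shapes below are taken from the numbers above, not from running the env:

```python
import numpy as np

def check_obs(declared_shape, obs):
    """Compare an env's declared observation shape to an actual sample."""
    actual = np.asarray(obs, dtype=np.float32).shape
    return actual == tuple(declared_shape)

# Stand-in values reproducing the mismatch described above:
# the space declares (25,), but the sample has 26 entries.
declared = (25,)
sample = np.zeros(26, dtype=np.float32)
print(check_obs(declared, sample))  # False
```

Running the same check against the real env's `reset()` output would show where the extra entry comes from.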
-
Hello,
Thanks for the great effort.
I am new to ParlAI. I am interested in training a BART FiD model on my custom data, using gold retrieved passages instead of a DPR-style retriever.
I under…
-
I'm trying to tune the hyperparameters of PPO2 with MlpLstmPolicy. Below is my code:
```python
import gym, optuna
import tensorflow as tf
from stable_baselines import PPO2
from stable_basel…
-
So I found an explanation of what the obs and actions represent in the BipedalWalker-v2 env here: [doc](https://github.com/openai/gym/wiki/BipedalWalker-v2).
However, the min and max values of the o…
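When the declared bounds are not usable in practice, one workaround is to estimate empirical per-dimension bounds from collected samples. This is a sketch using synthetic data in place of real rollouts (no Box2D required); it assumes the 24-dimensional observation described in the wiki:

```python
import numpy as np

# Synthetic stand-in for logged observations: in practice you would stack
# observations collected from env.step() rollouts (shape: n_samples x 24).
rng = np.random.default_rng(0)
rollout = rng.normal(size=(1000, 24)).astype(np.float32)

# Empirical per-dimension bounds, useful when the space reports +/-inf.
empirical_low = rollout.min(axis=0)
empirical_high = rollout.max(axis=0)
print(empirical_low.shape, empirical_high.shape)
```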
-
## Question
Why does the zoo call the standard `make_vec_env()` for all environments, including Atari, when SB3 has a dedicated function for them, `make_atari_env()`?
## Train of thought
- train.py calls …
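For reference, the split the question is about can be sketched as a small dispatch. The names mirror SB3's `make_vec_env()` / `make_atari_env()`, but the selection logic here is purely illustrative, not the zoo's actual code:

```python
# Illustrative dispatch only -- not the zoo's real logic. SB3's
# make_atari_env() applies the standard Atari preprocessing wrappers on top
# of the plain vectorization that make_vec_env() provides.
def pick_factory(env_id: str) -> str:
    atari_markers = ("NoFrameskip", "ALE/")
    if any(marker in env_id for marker in atari_markers):
        return "make_atari_env"
    return "make_vec_env"

print(pick_factory("BreakoutNoFrameskip-v4"))  # make_atari_env
print(pick_factory("CartPole-v1"))             # make_vec_env
```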
-
Hello @ppwwyyxx, this is a continuation of Issue 2166, but I couldn't figure out how to reopen that one. Please allow this issue to stay open until resolved. I have clearly identified a real problem…
-
Hello,
You advertise Tianshou as being fast and provide a comparison table in the README.
However, no reference code is linked to reproduce the results.
So, I decided to create a colab notebook…
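A minimal timing harness of the kind such a notebook needs might look like this (generic sketch; the workload lambda is a placeholder for an actual training run):

```python
import time

def time_run(workload, repeats=3):
    """Return the best wall-clock time over several repeats (naive harness)."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Placeholder workload; in the notebook this would be a training call.
elapsed = time_run(lambda: sum(range(100_000)))
print(elapsed > 0)
```

Taking the minimum over repeats reduces noise from background load, which matters when comparing libraries on shared Colab hardware.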
-
I think it's still TF experts right now (incompatible with our repo since torch port). Addresses part of #215.
-
The Lunar Lander example in the Getting Started documentation:
https://stable-baselines3.readthedocs.io/en/master/guide/examples.html
This example creates the Lunar Lander environment: env = gym…