-
Hi,
I want to use Dopamine with my own game environment.
I am not using the Asterix, Pong, or other Atari environments.
Does Dopamine allow that?
Thanks
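For what it's worth: Dopamine does not appear to be hard-wired to Atari. As far as I can tell, its `Runner` (in `dopamine.discrete_domains.run_experiment`) accepts a `create_environment_fn`, so any `gym.Env`-compatible environment should work. A minimal sketch (the commented-out Runner wiring is an assumption to verify against your Dopamine version):

```python
import gym
import numpy as np
from gym import spaces

class MyGameEnv(gym.Env):
    """Stand-in for your own game; any gym.Env-compatible class should do."""

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(4, dtype=np.float32)
        reward, done, info = 0.0, True, {}
        return obs, reward, done, info

def create_my_env():
    # Hook that Dopamine would call to build the environment.
    return MyGameEnv()

env = create_my_env()

# With Dopamine installed (wiring to check against your version):
# from dopamine.discrete_domains import run_experiment
# runner = run_experiment.Runner(base_dir, create_agent_fn,
#                                create_environment_fn=create_my_env)
# runner.run_experiment()
```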
-
- High: It blocks me from completing my task.
Hi, I’m new to OpenAI Gym and RLlib, so my question may be dumb.
I'm using
Anaconda python 3.9
Gym 0.21.0
Ray 1.12.1
Tensorflow 2.8
…
-
Hi,
Thanks for sharing the algorithms. I am trying to run the algorithm (e.g., AAC) with a model in an OpenAI Gym environment (e.g., Walker2d), but it runs into a dimensionality problem. I am curious about whethe…
-
The function `get_action_space_info` does not return the full action space when the action space is a `Tuple`; it just returns `{'name': 'Tuple'}`. Tested on Copy-v0.
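One possible fix, sketched below, is to recurse into the subspaces instead of reporting only the outer class name. (`describe_space` is a hypothetical helper, not the project's actual function, and the handled space types are just the common ones.)

```python
from gym import spaces

def describe_space(space):
    """Recursively serialize a gym space, descending into Tuple subspaces."""
    if isinstance(space, spaces.Tuple):
        return {"name": "Tuple",
                "subspaces": [describe_space(s) for s in space.spaces]}
    if isinstance(space, spaces.Discrete):
        return {"name": "Discrete", "n": int(space.n)}
    if isinstance(space, spaces.Box):
        return {"name": "Box", "shape": list(space.shape)}
    # Fallback: at least report the class name, as the current code does.
    return {"name": type(space).__name__}

info = describe_space(spaces.Tuple((spaces.Discrete(2), spaces.Discrete(3))))
```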
-
How do I add a custom RL algorithm? Is there a file I need to modify, or a class I need to implement? Any documentation would be great! I am new to Gym and Ray/RLlib.
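Not an authoritative answer, but in Ray 1.x the usual route is to implement a custom `Policy` subclass and build a trainer from it (e.g. with `ray.rllib.agents.trainer_template.build_trainer`), rather than modify RLlib's files. Below is a pure-Python sketch of the interface shape only: the class is a stand-in (no `ray` import), with method names mirroring RLlib's `Policy`.

```python
import random

class MyRandomPolicy:
    """Stand-in showing the core methods an RLlib-style Policy implements."""

    def __init__(self, observation_space, action_space, config):
        self.n_actions = action_space  # RLlib passes a real gym space here
        self.config = config

    def compute_actions(self, obs_batch, **kwargs):
        # RLlib expects a tuple: (actions, rnn_state_outs, extra_fetches).
        actions = [random.randrange(self.n_actions) for _ in obs_batch]
        return actions, [], {}

    def learn_on_batch(self, samples):
        # Update parameters from a sample batch; return learner stats.
        return {"learner_stats": {"loss": 0.0}}

policy = MyRandomPolicy(observation_space=None, action_space=2, config={})
actions, _, _ = policy.compute_actions([[0.0], [0.1], [0.2]])
```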
-
Make the environment available through `gym.make`
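A minimal sketch of the usual registration pattern (all names here are placeholders; in Gym 0.21 `entry_point` may be the class itself or a `'module:Class'` string):

```python
import gym
import numpy as np
from gym import spaces
from gym.envs.registration import register

class MyCustomEnv(gym.Env):
    """Placeholder environment; swap in the real dynamics."""

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        return np.zeros(3, dtype=np.float32)

    def step(self, action):
        return np.zeros(3, dtype=np.float32), 0.0, True, {}

# Register once, e.g. in the package's __init__.py; the id must end in -vN.
register(id="MyCustomEnv-v0", entry_point=MyCustomEnv)

env = gym.make("MyCustomEnv-v0")
```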
-
Hi
I am using `env = normalize(GymEnv("CartPole-v0"))` to create an environment. It works well, but every visualization lasts at most five seconds and then stops, even if the pole was …
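A guess worth checking: `CartPole-v0` is registered with a 200-step `TimeLimit`, and 200 rendered frames is only a few seconds of animation, so the episode may be ending on the time limit rather than because the pole fell:

```python
import gym

# CartPole-v0 carries a 200-step TimeLimit; at typical rendering speed that
# is roughly five seconds of animation before done=True is forced.
env = gym.make("CartPole-v0")
limit = env.spec.max_episode_steps  # 200 for CartPole-v0

# Workarounds: CartPole-v1 raises the limit to 500 steps, and
# env.unwrapped drops the TimeLimit wrapper entirely.
longer = gym.make("CartPole-v1")
raw = env.unwrapped
```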
-
Hi,
I tried to run gym-lgsvl.
When I executed the following command
`python -m baselines.run --alg=a2c --env=gym_lgsvl:lgsvl-v0 --num_timesteps=1e5`
I got an AttributeError:
`Attr…
-
Dear Snehal,
Thanks for sharing your great and extremely interesting research achievement. This code relates to simulation in OpenAI Gym environments for CartPole, LunarLanderContinuous and…
-
Hi,
Very nice work. I am trying to test your implementation of the DDPG algorithm, but I can't find a way to properly run all your environments with it. Can you provide me with the co…