-
Right now, one of the biggest weaknesses of the Gym API is that `done` is used for both truncation and termination. The problem is that algorithms in the Q-learning family (and, I assume, others) depend on t…
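The distinction matters for bootstrapping: on a true termination the TD target drops the next-state value, while on a time-limit truncation the underlying MDP would have continued, so the target should still bootstrap. A minimal sketch of that difference (the separate `terminated` flag is illustrative here, not part of the current Gym API, which only exposes a single `done`):

```python
import numpy as np

def td_target(reward, next_q_values, gamma, terminated):
    """One-step TD target for Q-learning.

    On a true termination the episode really ends, so the target is
    just the reward.  On a time-limit truncation the environment would
    have continued, so we still bootstrap from the next state.
    """
    if terminated:
        return reward
    return reward + gamma * np.max(next_q_values)

# The same transition yields different targets depending on the flag.
next_q = np.array([0.5, 2.0, 1.0])
print(td_target(1.0, next_q, 0.99, terminated=True))   # 1.0
print(td_target(1.0, next_q, 0.99, terminated=False))  # 1.0 + 0.99*2.0 = 2.98
```

With only a single `done`, an agent cannot tell these two cases apart and ends up treating time-limit cutoffs as real terminal states.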
-
Hi,
I tried to use your Atari example from the docs, but I receive an error about numpy. I couldn't find a solution for it. I even deleted my Anaconda install and set up the environment from scratch. I shared detail…
-
We should update the version of ALE used in atari-py to get at least the following changes:
* https://github.com/mgbellemare/Arcade-Learning-Environment/pull/265
* https://github.com/mgbellemare/A…
-
Bug 1.
The Gym version number is 0.21.0, while the most recent release is 0.22.0:
https://github.com/openai/gym/blob/master/gym/version.py
https://github.com/openai/gym/releases/tag/0.22.0
Bug 2.
I…
-
Chopper Command sometimes does not terminate when the agent runs out of lives, instead continuing with an empty battlefield until exactly 21,600 timesteps have elapsed (i.e. 108k frames with frameskip…
-
I believe you keep a close eye on the ALE repo, but just as an additional FYI: there is an optimization PR from @qstanczyk that inlines some code for extra performance, and it will be merged soon: http…
-
`gym.make('ALE/Breakout-v5', render_mode='human')` does not work for me: it gives me an Arcade Learning Environment window that is stuck not responding, and if I try to close it, the kernel dies.…
-
Hello,
I followed the instructions to import ROMS, however, I received this message:
> python -m atari_py.import_roms ./Roms/ROMS
> Traceback (most recent call last):
> File "/Users/rosen/op…
-
I'm implementing a wrapper of the v5 environment that includes frame skipping and stacking, etc.
Looking at the default constructor, frame skipping is set to 5. Why?
Am I missing a paper that explai…
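For reference, a frame-skip wrapper of the usual shape repeats each agent action `k` times, sums the intermediate rewards, and returns the last observation. A minimal sketch against a stand-in environment (the `DummyEnv` class is a placeholder so the example is self-contained; this is not the ALE implementation):

```python
class DummyEnv:
    """Stand-in environment: the observation is a step counter."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        reward = 1.0            # one reward unit per underlying frame
        done = self.t >= 10     # episode ends after 10 frames
        return self.t, reward, done, {}


class FrameSkip:
    """Repeat each agent action `skip` times, accumulating reward."""

    def __init__(self, env, skip=4):
        self.env = env
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward = 0.0
        obs, done, info = None, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:            # stop repeating at episode end
                break
        return obs, total_reward, done, info


env = FrameSkip(DummyEnv(), skip=4)
env.reset()
obs, reward, done, info = env.step(0)
print(obs, reward, done)  # 4 4.0 False
```

Note that when the environment already applies its own frame skip internally, wrapping it like this multiplies the two factors, which is one reason to check the constructor default before adding a wrapper on top.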
-
To dump (clone) or load (restore) the underlying game state, one can use the following methods:
```
e_state = env.unwrapped.clone_state() # returns a 1-D vector
env.unwrapped.restore_state(e_state)
…