Farama-Foundation / Arcade-Learning-Environment

The Arcade Learning Environment (ALE) -- a platform for AI research.
https://ale.farama.org/

ALE v0.6 differences in start state #291

Open JesseFarebro opened 4 years ago

JesseFarebro commented 4 years ago

An issue was raised (https://github.com/openai/gym/issues/1777) describing differences between v0.5.2 and v0.6.0 of the ALE. I traced some of them to this commit https://github.com/mgbellemare/Arcade-Learning-Environment/commit/7bff96b4b64edcffbeb2d9bb83b1685ab506ea2b#diff-d9d868097a7403416e6ef352d95dc4feR178, which changes how StellaEnvironment::softReset works.

The RESET action is applied m_num_reset times, which leads to a different starting state for the agent. This may be intended behaviour in StellaEnvironment::reset, but it has unintended consequences in StellaEnvironment::softReset.

For example, here are the starting states for Ms. Pacman in ALE v0.5.2 and v0.6.0. Note that if you emulate a single RESET action, you get the v0.5.2 starting state.

[Image: Ms. Pacman starting frame, ALE v0.5.2]

[Image: Ms. Pacman starting frame, ALE v0.6.0]

You can see subtle differences between the two frames (e.g., the colour of the ghosts in jail).
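For anyone who wants to check this without eyeballing frames, here is a minimal sketch that hashes the first observable frame after a reset; run it once under each ALE version and compare the output. It assumes the ale_py Python bindings and a local Ms. Pacman ROM path, both of which are stand-ins and not part of the original report.

```python
# Minimal sketch: hash the first frame after reset so start states can be
# compared across ALE versions. The ale_py bindings and the ROM path are
# assumptions, not part of the original report.
import hashlib
from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("random_seed", 0)      # fix the seed so runs are comparable
ale.loadROM("ms_pacman.bin")      # hypothetical local ROM path

ale.reset_game()                  # start state differs across versions
frame = ale.getScreenRGB()        # first frame the agent would observe
print(hashlib.md5(frame.tobytes()).hexdigest())
```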

I haven't looked into why RESET is called repeatedly. Is this worth investigating further? It doesn't seem like it should affect asymptotic performance.

mgbellemare commented 4 years ago

I wouldn't expect this to be a big driver of performance, no. Determinism in the ALE has always been brittle at best -- going through saveState/loadState should provide a more robust route to reproducibility. Thanks for flagging this!
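For reference, a minimal sketch of that pattern, assuming the Python bindings expose the saveState/loadState pair from the C++ ALEInterface; the ROM path and the NOOP policy are placeholders:

```python
# Minimal sketch of the saveState/loadState pattern for reproducible
# episodes. Assumes ale_py exposes ALEInterface.saveState/loadState;
# the ROM path and the NOOP policy are placeholders.
from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("ms_pacman.bin")      # hypothetical local ROM path

ale.reset_game()
ale.saveState()                   # snapshot the exact emulator state once

for episode in range(3):
    ale.loadState()               # restart every episode from that snapshot
    while not ale.game_over():
        ale.act(0)                # PLAYER_A_NOOP stand-in for a real policy
```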