Add the Chrome T-Rex rush game to PLE. It should be a fun game for learning & playing. :smile:
Here's a random agent playing the game:
## Game Spec

### Observation Space

- width: 600
- height: 150
- depth: 3
- pixels in `uint8`
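For a quick sanity check against this spec, something like the following should work (a sketch; the `TRexRush` class name and import path are assumptions, since they depend on how the game ends up registered in PLE):

```python
from ple import PLE
# NOTE: class name / import path below are assumptions for illustration.
from ple.games.trexrush import TRexRush

game = TRexRush()
env = PLE(game, display_screen=False)
env.init()

obs = env.getScreenRGB()  # pygame surfarray layout: (width, height, channels)
assert obs.shape == (600, 150, 3)
assert obs.dtype.name == "uint8"
```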
### Action Space

- `NO_OP`
- `JUMP` (space key)
- `DUCK` (down key)
- `STAND` (up key)
The reason we add a `STAND` action is that gym-ple somehow doesn't propagate `NO_OP` to our game, which leaves our dino unable to stand up again after ducking.
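For reference, a minimal random agent driving the game through PLE's standard interface would look roughly like this (again, `TRexRush` is a hypothetical name):

```python
import random
from ple import PLE
from ple.games.trexrush import TRexRush  # hypothetical import path

game = TRexRush()
env = PLE(game, fps=30, display_screen=True)
env.init()

# Expected to correspond to the actions listed above; per PLE convention,
# acting with None is a NO_OP.
actions = env.getActionSet()

while not env.game_over():
    reward = env.act(random.choice(actions))
env.reset_game()
```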
### Rewards

0 per tick, +1 for passing each obstacle, and -5 for game over. We use the predefined reward values from the base game.
We also cap the high score at 50 (configurable via `max_score`). Fun fact: when I first tried to train my agent, one episode lasted about 2 hours (it passed ~3K obstacles).
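These values follow PLE's standard `reward_values` mapping, so they can also be overridden at construction time. A sketch, assuming `max_score` is exposed as a constructor argument (the exact signature is an assumption):

```python
from ple import PLE
from ple.games.trexrush import TRexRush  # hypothetical import path

# max_score caps the episode, per the description above; the constructor
# signature here is an assumption.
game = TRexRush(max_score=50)

# PLE's standard reward_values mapping; these mirror the values above.
rewards = {
    "tick": 0.0,      # reward per frame
    "positive": 1.0,  # passing an obstacle
    "loss": -5.0,     # game over
}
env = PLE(game, reward_values=rewards)
env.init()
```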
## Simplification

We removed the clouds and the high-score board in this implementation to keep the observation cleaner for agents. Maybe we should add an option for this?
## Credit

The implementation is largely based on @shivamshekhar's Chrome-T-Rex-Rush.