Closed: jakegrigsby closed this 9 months ago
This seems to have been fixed by a few showdown server restarts... not sure what I was doing wrong. I've updated the example in case anyone else needs it.
This is a bit unrelated, but I'm curious why the `Gen9EnvSinglePlayer.action_to_move` logic includes previous-generation mechanics like mega evolution and Dynamax: https://github.com/hsahovic/poke-env/blob/03b02729f756c7c4b2fb0156af761d2df540757d/src/poke_env/player/env_player.py#L515-L535
When these actions fail, they fall back to a random choice. Is keeping these redundant actions an intentional decision, so that the action-space size stays consistent with gen 8?
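To make the question concrete, here is a minimal sketch (not poke-env's actual code; the sizes and slot layout are hypothetical) of the pattern being described: a fixed-size action space shared across generations, where slots that are illegal in the current generation fall back to a random legal choice.

```python
# Hedged illustration of redundant action slots with a random fallback.
# N_ACTIONS and the legal-slot range are made-up numbers for illustration;
# they do not match poke-env's real action encoding.
import random

N_ACTIONS = 22                 # hypothetical size kept equal to gen 8's
LEGAL_GEN9 = range(0, 12)      # hypothetical: slots 12+ are mega/z/dynamax

def action_to_move(action: int) -> int:
    """Map an agent action to a move index; invalid slots pick randomly."""
    if action in LEGAL_GEN9:
        return action
    # Redundant slot: keep the action space fixed, but choose a legal move.
    return random.choice(list(LEGAL_GEN9))

print(action_to_move(5))                    # a legal slot maps to itself: 5
print(action_to_move(15) in LEGAL_GEN9)     # an illegal slot falls back: True
```

The upside of this design is that one policy network output shape works for every generation; the downside is exactly what the question raises: several actions become aliases for "random legal move".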
Hi, does anyone have a minimal example of how to step a gym env with the modern gymnasium API in a single thread, separate from keras-rl? Looking through the docs/examples/tests, it seems the library is moving away from the `EnvPlayer` subclasses toward `OpenAIGymEnv`, with a few extra abstract methods to define (?). I can initialize a simple env as shown in `examples/openai_example.py`, but I cannot get it to `reset` or `step` in the main thread without timing out.