Hi @thomashirtz, I think the intention is that you should be able to use the trainers without using the physics engine itself. However, the wrappers muck this up a bit at the moment. While you should only need to implement the Environment interface, I would pay attention to wrap_for_training here (https://github.com/google/brax/blob/main/brax/training/agents/ppo/train.py#L115-L116) and then make sure EpisodeWrapper and AutoResetWrapper work as intended for your environment! A rough sketch of the kind of check I have in mind is below.
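This is an untested sketch, not something from the library: it assumes the wrappers are importable as `brax.envs.wrappers` with the EpisodeWrapper / VmapWrapper / AutoResetWrapper names used by wrap_for_training (depending on your brax version the path may instead be `brax.envs.wrappers.training`, so adjust the import). `check_wrapped` and its defaults are just illustrative, and the `env` argument stands for whatever custom environment you wrote; it only needs reset, step, observation_size and action_size.

```python
import jax
import jax.numpy as jnp
from brax.envs import wrappers  # or: from brax.envs.wrappers import training as wrappers


def check_wrapped(env, batch_size=8, episode_length=1000, action_repeat=1):
  """Apply the same wrappers the trainers use and smoke-test reset/step under jit.

  `env` is your custom environment; it only needs reset(rng) -> State,
  step(state, action) -> State, observation_size and action_size.
  """
  # Mirrors the wrapping done in wrap_for_training, so each layer can be
  # checked in isolation if something breaks.
  env = wrappers.EpisodeWrapper(env, episode_length=episode_length,
                                action_repeat=action_repeat)
  env = wrappers.VmapWrapper(env)
  env = wrappers.AutoResetWrapper(env)

  reset_fn = jax.jit(env.reset)
  step_fn = jax.jit(env.step)

  # Batched reset: VmapWrapper vmaps the inner reset over a batch of rng keys.
  rng = jax.random.split(jax.random.PRNGKey(0), batch_size)
  state = reset_fn(rng)

  # A few batched steps; inspect state.obs, state.done and state.info here,
  # and check that auto-resets kick in when done flips to 1.
  action = jnp.zeros((batch_size, env.action_size))
  for _ in range(5):
    state = step_fn(state, action)
  return state
```

If the wrapped env resets and steps cleanly under jit like this, you're most of the way there.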
Hello,
I am currently trying to replicate the structure of a brax environment so that I can train it with the algorithms in the library (SAC, PPO, etc.).
My environment is not related to the physics engine; currently I am putting the environment state inside state.qp, roughly as sketched below. I can now run a jitted version of my environment. However, when I try to train SAC with it, I start getting some obscure errors.

The main question of this post: to the best of your knowledge, is it possible to use a custom environment with this library, or is there something in the framework tied to the physics simulation that will prevent it from working? Also, are there any gotchas I should pay attention to?
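For concreteness, here is a stripped-down sketch of what I am doing. The class name and numbers are just illustrative; I am only relying on brax.QP having pos/rot/vel/ang fields and on the State dataclass from brax.envs.env (qp, obs, reward, done, metrics), as in the brax version I am on. I am not subclassing brax.envs.env.Env here, only matching its interface.

```python
import jax.numpy as jnp
import brax
from brax.envs import env as brax_env


class ToyEnv:
  """Non-physics toy env: its whole internal state is a counter kept in qp.pos."""

  def reset(self, rng: jnp.ndarray) -> brax_env.State:
    del rng  # unused in this toy example
    # Abuse QP as a generic state container; only pos[0, 0] is meaningful here
    # (rot/vel/ang are never used, they just fill out the QP fields).
    qp = brax.QP(pos=jnp.zeros((1, 3)), rot=jnp.zeros((1, 4)),
                 vel=jnp.zeros((1, 3)), ang=jnp.zeros((1, 3)))
    obs = self._get_obs(qp)
    return brax_env.State(qp=qp, obs=obs, reward=jnp.zeros(()),
                          done=jnp.zeros(()), metrics={})

  def step(self, state: brax_env.State, action: jnp.ndarray) -> brax_env.State:
    # Keep all evolving state in qp/obs (not in Python attributes) so the
    # training wrappers can snapshot and restore it.
    qp = brax.QP(pos=state.qp.pos + 1.0, rot=state.qp.rot,
                 vel=state.qp.vel, ang=state.qp.ang)
    obs = self._get_obs(qp)
    reward = -jnp.sum(jnp.square(action))
    done = jnp.where(qp.pos[0, 0] >= 10.0, 1.0, 0.0)
    return state.replace(qp=qp, obs=obs, reward=reward, done=done)

  def _get_obs(self, qp) -> jnp.ndarray:
    return qp.pos.ravel()

  @property
  def observation_size(self) -> int:
    return 3

  @property
  def action_size(self) -> int:
    return 1
```

I can jit env.reset and env.step on this and roll it out without problems; the obscure errors only appear once I hand it to the SAC trainer (brax.training.agents.sac.train with environment=env and the usual num_timesteps / episode_length arguments).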
Thanks!