nicrusso7 / rex-gym

OpenAI Gym environments for an open-source quadruped robot (SpotMicro)

Issue with multiple envs and determinism #27

Open araffin opened 3 years ago

araffin commented 3 years ago

Hello,

Thanks for the project, it looks awesome.

I've been trying to use Stable-Baselines3 with it (we created a fork to register the gym env: https://github.com/osigaud/rex-gym) and was able to train an agent. However, after training, or when using a second env for testing, we could not reproduce the results.

Do you know what can change between two instantiations of the environment? The observation provided to the agent seems to be quite different from the one seen during training (we are testing a simple walk forward on a plane).

I'm using the RL Zoo to train the agent (and to rule out any mistake on my part). It works perfectly with other PyBullet envs (e.g. HalfCheetahBulletEnv-v0) but not with rex-gym :/

Additionally, the env does not seem to be deterministic. Could you confirm that, and do you know why?

PS: if needed, I can provide a minimal example to reproduce the issue.
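
For reference, a minimal determinism check along these lines could look as follows. This is only a sketch: `RexWalk-v0` is a placeholder for whatever id the fork actually registers, `import rex_gym` is assumed to perform that registration, and the pre-0.26 Gym step API is used.

```python
import gym
import numpy as np

import rex_gym  # noqa: F401 -- assumed to register the envs with gym on import

# Placeholder id: replace with the id actually registered by the fork.
ENV_ID = "RexWalk-v0"


def rollout(seed, n_steps=50):
    """Run a short rollout with a fully seeded env and seeded action sampling."""
    env = gym.make(ENV_ID)
    env.seed(seed)
    env.action_space.seed(seed)  # so the sampled actions are identical across runs
    observations = [env.reset()]
    for _ in range(n_steps):
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)  # pre-0.26 Gym step API
        observations.append(obs)
        if done:
            break
    env.close()
    return np.array(observations)


first = rollout(seed=0)
second = rollout(seed=0)

if first.shape != second.shape:
    print("Rollouts diverged in length:", first.shape, second.shape)
else:
    # A deterministic env should give an exact (or near-exact) match here.
    print("max abs observation difference:", np.abs(first - second).max())
```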

nicrusso7 commented 2 years ago

Hi, thanks for reaching out.

This project is more of an experiment, started from the open-source Minitaur examples you can find in the PyBullet repository.

Frankly, I don't have deep knowledge of this field and I'm still learning the basics of ML applied to robotics.

Happy to help you figure out why you cannot reproduce the results, though. I'm afraid this is all about the action step frequency.
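
If the action/control step frequency is indeed the culprit, one simple sanity check is to print the timing parameters of the env instantiated by the training script and by the testing script and verify they match. A rough sketch, assuming Minitaur-style attribute names inherited from the PyBullet examples (the real names should be checked against the rex_gym source):

```python
import gym

import rex_gym  # noqa: F401 -- assumed to register the envs with gym on import

env = gym.make("RexWalk-v0")  # placeholder id, as above
raw = env.unwrapped

# Attribute names are an assumption, following the Minitaur-style envs
# this project is derived from; check the rex_gym source for the real names.
for name in ("_action_repeat", "_time_step", "control_time_step"):
    print(name, "=", getattr(raw, name, "<not found>"))
```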