qgallouedec / panda-gym

Set of robotic environments based on PyBullet physics engine and gymnasium.
MIT License

Clarification about the observation or system state returned by the task class #96

Open wilhem opened 3 months ago

wilhem commented 3 months ago

Hello,

I was carefully studying the code for the Panda reach task, and two questions came to mind:

  1. The observation vector returned by the system contains the position of the robot's end-effector. I wonder whether it would still work if the observation consisted of the robot's joint angles instead of the end-effector position (see the wrapper sketch after this list). Theoretically, the agent should be able to learn either way. Or not?
  2. The reward is calculated from the distance between the target and the end-effector, or, in sparse mode, it is a binary signal based on whether the distance < distance_threshold. But with a sparse reward, any DDPG, PPO, or SAC agent will fail to learn. How do you train the agent with the sparse reward? Did you use Hindsight Experience Replay (HER) from SB3?
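
For concreteness, here is a minimal sketch of what I mean by a joint-angle observation. This is my own illustration, not the library's API: I am assuming panda-gym's Dict observation layout and that the unwrapped robot exposes `get_joint_angle`; the wrapper name and `n_joints=7` are mine.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import panda_gym  # noqa: F401  (registers PandaReach-v3)


class JointAngleObservation(gym.ObservationWrapper):
    """Replace the ee-based 'observation' entry with the arm's joint angles."""

    def __init__(self, env, n_joints=7):  # 7 revolute joints on the Panda arm
        super().__init__(env)
        self.n_joints = n_joints
        obs_spaces = dict(env.observation_space.spaces)
        obs_spaces["observation"] = spaces.Box(
            -np.inf, np.inf, shape=(n_joints,), dtype=np.float32
        )
        self.observation_space = spaces.Dict(obs_spaces)

    def observation(self, obs):
        # Read joint angles from the simulated robot (assumed accessor).
        robot = self.env.unwrapped.robot
        angles = np.array(
            [robot.get_joint_angle(joint=i) for i in range(self.n_joints)],
            dtype=np.float32,
        )
        return {**obs, "observation": angles}


env = JointAngleObservation(gym.make("PandaReach-v3"))
```

Note that `achieved_goal` and `desired_goal` would still carry the end-effector/target positions, which the goal-conditioned reward needs anyway.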

Thanks

qgallouedec commented 3 months ago
  1. Yes! ~~It is precisely the config of PandaReachJoints-v3~~ edit: my bad, in that environment you still get the ee position.
  2. True again, the sparsity makes the task really hard to learn. It could still work for reach, but for the other tasks you have a very low chance of learning anything. That's why we use tricks like HER, indeed (see the training sketch below).
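
For reference, a minimal sketch of what HER training with SB3 looks like on the sparse reach task, following SB3's documented `HerReplayBuffer` usage (the hyperparameters here are illustrative defaults, not tuned values from this repo):

```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers PandaReach-v3)
from stable_baselines3 import SAC, HerReplayBuffer

# PandaReach-v3 uses the sparse reward by default
# (the dense variant is registered as PandaReachDense-v3).
env = gym.make("PandaReach-v3")

model = SAC(
    "MultiInputPolicy",    # required for Dict observations
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,  # relabel 4 virtual goals per transition
        goal_selection_strategy="future",
    ),
    verbose=1,
)
model.learn(total_timesteps=20_000)
```

The relabeling turns failed episodes into successes for virtual goals, so the agent gets a learning signal even when it never reaches the real target.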