StanfordVL / iGibson

A Simulation Environment to train Robots in Large Realistic Interactive Scenes
http://svl.stanford.edu/igibson
MIT License
670 stars 160 forks

Code of Robot Navigation from iGibson 1.0 Paper #149

Closed rudrapoudel closed 2 years ago

rudrapoudel commented 2 years ago

I couldn't find the code for the robot navigation experiments from the iGibson 1.0 paper.

  1. Is it available? What is the closest pointer?
  2. Is the task point_nav_random with iGibsonEnv enough to reproduce the results with a good policy algorithm (e.g., PPO or SAC)? Or do we also need to further tune the task, i.e., the rewards etc.?

Thank you for the awesome simulator!

sycz00 commented 2 years ago

Hey,

I had a similar question back then. I found stable_baselines3_example.py in igibson/examples/demo, which uses PPO from SB3 and provides the agent with RGB + depth observations in addition to some proprioceptive inputs.

roberto-martinmartin commented 2 years ago

Hi @rudrapoudel and @sycz00, we had two navigation tasks in iG1: object navigation (to a lamp) and point navigation based on LiDAR. I could ask the students to share the old code; however, the API has changed in iGibson2, so it may be hard to use as is.

Nevertheless, @sycz00 is right: the code in examples/learning/stable_baselines3_example.py should get you to a similar starting point. You could change the observations and/or the scene generation to obtain policies similar to those in iG1.

About the second question: yes, it should converge as is.
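To add some context on the reward question: point-navigation setups like this one typically use a shaped reward built from geodesic-distance progress toward the goal, a collision penalty, and a terminal success bonus. Below is a minimal, self-contained sketch of that idea; the function name and all coefficient values are illustrative assumptions, not iGibson's actual defaults (check the task config in your iGibson version for those).

```python
# Hedged sketch of a shaped point-navigation reward.
# All weights here are illustrative assumptions, not iGibson's defaults.

def point_nav_reward(prev_geodesic: float, cur_geodesic: float,
                     collided: bool, reached_goal: bool,
                     progress_weight: float = 1.0,
                     collision_penalty: float = -0.1,
                     success_reward: float = 10.0) -> float:
    """Combine three common terms:
    - progress: positive when the agent moves closer to the goal
      along the geodesic (shortest-path) distance,
    - a fixed penalty on any collision this step,
    - a one-time bonus when the goal is reached.
    """
    # Potential-based progress term: decrease in distance-to-goal.
    reward = progress_weight * (prev_geodesic - cur_geodesic)
    if collided:
        reward += collision_penalty
    if reached_goal:
        reward += success_reward
    return reward
```

For example, moving 0.2 m closer to the goal without colliding yields a reward of 0.2, while the same step ending in a collision yields 0.1. Tuning mostly amounts to balancing the collision penalty against the progress term so the policy neither hugs walls nor freezes.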

I hope this helps!