Closed PhanindraParashar closed 3 years ago
Hello,
This repo (master branch) contains only the simulation environment we developed and used in the H-ReIL paper. The full training pipeline is not available online, as it builds on several other codebases, but it should be straightforward to replicate: (i) wrap the scenario in example_intersection.py as an OpenAI Gym environment, (ii) collect data on it (using the policy we provided, human demonstrations, or any other control policy), (iii) train behavioral cloning or CoIL policies on those data, and finally (iv) train the high-level RL policy using OpenAI Baselines or Stable-Baselines.
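For step (i), a minimal sketch of the Gym-style wrapper might look like the following. Note this is an illustration only: `IntersectionScenario` is a toy stand-in I made up for the dynamics in example_intersection.py, and a real wrapper would instead step CARLO's world and read the ego car's state.

```python
import numpy as np

class IntersectionScenario:
    """Toy stand-in for the CARLO scenario: ego car on a 1-D lane toward x = 10."""
    def __init__(self):
        self.x, self.v = 0.0, 0.0
    def tick(self, throttle, dt=0.1):
        # In the real wrapper this would be CARLO's world.tick() after
        # setting the ego car's control inputs.
        self.v += throttle * dt
        self.x += self.v * dt

class CarloGymEnv:
    """Minimal Gym-style interface: reset() -> obs, step(a) -> (obs, reward, done, info)."""
    def __init__(self, max_steps=200):
        self.max_steps = max_steps
    def reset(self):
        self.scenario = IntersectionScenario()
        self.t = 0
        return self._obs()
    def step(self, action):
        self.scenario.tick(float(action))
        self.t += 1
        done = self.scenario.x >= 10.0 or self.t >= self.max_steps
        # Small time penalty, bonus on reaching the goal (made-up reward shaping).
        reward = 1.0 if self.scenario.x >= 10.0 else -0.01
        return self._obs(), reward, done, {}
    def _obs(self):
        return np.array([self.scenario.x, self.scenario.v], dtype=np.float32)

env = CarloGymEnv()
obs = env.reset()
for _ in range(5):
    obs, reward, done, info = env.step(1.0)
```

Once the scenario is behind this interface, steps (ii)-(iv) only ever interact with `reset()` and `step()`, so the data-collection and training code never needs to know about CARLO internals.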
Step (i) above also makes it possible to run any Gym-compatible RL implementation (e.g., OpenAI Baselines or Stable-Baselines) on CARLO. For an example of this, see the repository we created for a class homework: https://github.com/PrinciplesofRobotAutonomy/CS237B_HW3. Inside the gym_carlo/envs directory, you can find several CARLO scenarios wrapped as OpenAI Gym environments.
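To illustrate why the Gym interface is all an RL algorithm needs, here is a hedged sketch of the interaction pattern: in practice you would pass the wrapped CARLO environment to a library algorithm such as PPO from Stable-Baselines, but the same loop structure applies. `StubEnv` and the one-parameter random-search "policy search" below are invented stand-ins, not part of the repo.

```python
import numpy as np

class StubEnv:
    """Stands in for any Gym-wrapped CARLO scenario (reset/step interface)."""
    def reset(self):
        self.x = 0.0
        return np.array([self.x])
    def step(self, action):
        self.x += float(action)
        done = abs(self.x) >= 1.0      # episode ends at the lane boundary
        reward = self.x                # made-up reward: progress in +x
        return np.array([self.x]), reward, done, {}

def rollout(env, policy_gain, horizon=20):
    """Return the episode return of a constant-action 'policy'."""
    env.reset()
    total = 0.0
    for _ in range(horizon):
        _, r, done, _ = env.step(policy_gain)
        total += r
        if done:
            break
    return total

# Trivial random search over a one-parameter policy: any RL algorithm
# (PPO, SAC, ...) would slot into this same reset/step loop.
env, best_gain, best_ret = StubEnv(), 0.0, -np.inf
for gain in np.linspace(-1.0, 1.0, 21):
    ret = rollout(env, gain)
    if ret > best_ret:
        best_gain, best_ret = gain, ret
```

The search correctly prefers a positive gain, since only forward motion is rewarded; swapping in a library algorithm changes nothing about how the environment is consumed.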
Hope this helps!
How can we start using this simulator for training?