An OpenAI Gym third-party environment for the CARLA simulator.
Set up a conda environment:
$ conda create -n env_name python=3.6
$ conda activate env_name
Clone this git repo into an appropriate folder:
$ git clone https://github.com/cjy1992/gym-carla.git
Enter the repo root folder and install the packages:
$ pip install -r requirements.txt
$ pip install -e .
Download CARLA_0.9.6, extract it to some folder, and add the CARLA Python API egg to the PYTHONPATH environment variable:
$ export PYTHONPATH=$PYTHONPATH:$YourFolder$/CARLA_0.9.6/PythonAPI/carla/dist/carla-0.9.6-py3.5-linux-x86_64.egg
Enter the CARLA root folder and launch the CARLA server:
$ ./CarlaUE4.sh -windowed -carla-port=2000
You can use Alt+F1 to regain mouse control.
Alternatively, run CARLA in non-display (headless) mode:
$ DISPLAY= ./CarlaUE4.sh -opengl -carla-port=2000
Run the test script:
$ python test.py
See test.py for details on how to use the CARLA gym wrapper.
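As a minimal sketch of how the wrapper is typically configured, the snippet below builds a parameter dictionary and shows (but does not execute) the environment creation. The parameter names here are illustrative assumptions; consult test.py for the authoritative list.

```python
# Illustrative environment parameters; the exact keys expected by gym-carla
# may differ -- see test.py for the real configuration.
params = {
    'port': 2000,              # CARLA server port (matches -carla-port above)
    'town': 'Town03',          # assumed map name
    'max_time_episode': 1000,  # assumed maximum episode timesteps
}

# Requires a running CARLA server, so shown here but left commented out:
# import gym
# import gym_carla
# env = gym.make('carla-v0', params=params)
# obs = env.reset()
# obs, reward, done, info = env.step(action)
```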
We provide a dictionary observation including a front-view camera image (obs['camera']), a bird's-eye-view lidar point cloud (obs['lidar']), and a bird's-eye-view semantic representation (obs['birdeye']).
We also provide a state vector observation (obs['state']) composed of the lateral distance and heading error between the ego vehicle and the target lane center line (in meters and radians), the ego vehicle's speed (in meters per second), and an indicator of whether there is a front vehicle within a safety margin.
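The observation layout can be illustrated with mock data; the array shapes below are assumptions for illustration only, and the real arrays come from env.reset() and env.step():

```python
import numpy as np

# Mock observation mirroring the dictionary structure described above.
# Shapes and values are assumed, not taken from the actual environment.
obs = {
    'camera':  np.zeros((256, 256, 3), dtype=np.uint8),  # front-view RGB image
    'lidar':   np.zeros((256, 256, 3), dtype=np.uint8),  # bird's-eye lidar render
    'birdeye': np.zeros((256, 256, 3), dtype=np.uint8),  # bird's-eye semantic map
    'state':   np.array([0.5, 0.02, 6.0, 1.0]),          # example state values
}

# Unpack the 4-dimensional state vector.
lateral_dist, heading_err, speed, front_vehicle = obs['state']
```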
An episode terminates when the ego vehicle collides, runs out of lane, reaches the destination, or reaches the maximum number of episode timesteps. Users may modify the _terminal function in carla_env.py to customize the termination condition.
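A customized termination check in the spirit of _terminal might look like the hedged sketch below; the argument names are hypothetical stand-ins for the environment's internal state, not the actual signature in carla_env.py:

```python
def is_terminal(collided, lateral_dist, step, dest_reached,
                max_lateral_dist=2.0, max_steps=1000):
    """Return True when any termination condition holds (illustrative)."""
    if collided:                              # ego vehicle collided
        return True
    if abs(lateral_dist) > max_lateral_dist:  # ran out of lane
        return True
    if dest_reached:                          # reached destination
        return True
    if step >= max_steps:                     # hit maximum episode timesteps
        return True
    return False
```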
The reward is a weighted combination of longitudinal speed and penalties for collision, exceeding the maximum speed, driving out of lane, large steering, and large lateral acceleration. Users may modify the _get_reward function in carla_env.py to customize the reward function.
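Such a weighted combination can be sketched as follows; the weights and helper flags are illustrative assumptions, not the values used in _get_reward:

```python
def compute_reward(speed, collided, over_speed, out_of_lane,
                   steer, lat_accel,
                   w_speed=1.0, w_collision=200.0, w_over_speed=10.0,
                   w_out_of_lane=1.0, w_steer=5.0, w_lat_accel=0.2):
    """Weighted reward in the spirit of _get_reward (weights assumed)."""
    reward = w_speed * speed                   # encourage longitudinal speed
    reward -= w_collision * float(collided)    # collision penalty
    reward -= w_over_speed * float(over_speed)     # exceeding max speed
    reward -= w_out_of_lane * float(out_of_lane)   # out-of-lane penalty
    reward -= w_steer * steer ** 2             # penalize large steering
    reward -= w_lat_accel * lat_accel ** 2     # penalize large lateral accel.
    return reward
```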