PacktPublishing / Hands-On-Intelligent-Agents-with-OpenAI-Gym

Code for Hands On Intelligent Agents with OpenAI Gym book to get started and learn to build deep reinforcement learning agents using PyTorch
https://www.packtpub.com/big-data-and-business-intelligence/hands-intelligent-agents-openai-gym
MIT License

wrapper for CARLA 0.9.x #27

Closed anyboby closed 4 years ago

anyboby commented 5 years ago

Hello there! First off, thank you for the dedicated work! I am having an issue when trying to use the carla_env.py wrapper with CARLA 0.9.5. I am aware the wrapper was written for 0.8.x, but the sections in the book concerning version updates gave me hope that you might be able to help.

The error message upon running the carla_env.py script is as follows:

```
Initializing new Carla server...
terminating with uncaught exception of type clmdep_msgpack::v1::type_error: std::bad_cast
Signal 6 caught.
Malloc Size=65538 LargeMemoryPoolOffset=65554
Malloc Size=65535 LargeMemoryPoolOffset=131119
Malloc Size=115872 LargeMemoryPoolOffset=247008
Error during reset: Traceback (most recent call last):
  File "/envs/carla_env.py", line 223, in reset
    return self.reset_env()
  File "/envs/carla_env.py", line 271, in reset_env
    scene = self.client.load_settings(settings)
  File "/envs/carla/client.py", line 75, in load_settings
    return self._request_new_episode(carla_settings)
  File "/envs/carla/client.py", line 160, in _request_new_episode
    data = self._world_client.read()
  File "/envs/carla/tcp.py", line 73, in read
    header = self._read_n(4)
  File "/envs/carla/tcp.py", line 91, in _read_n
    raise TCPConnectionError(self._logprefix + 'connection closed')
carla.tcp.TCPConnectionError: (localhost:54854) connection closed
Clearing Carla server state
Initializing new Carla server...
```

... and the connection process restarts but keeps failing.

Do you have any suggestions for what one could try to fix this error? Thanks in advance!

praveen-palanisamy commented 5 years ago

Hi @anyboby: For CARLA 0.9.x, use the code in this repository: https://github.com/praveen-palanisamy/macad-gym. It provides the wrappers needed to create OpenAI Gym-compatible learning environments for CARLA 0.9.x.
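As a rough sketch of how such a wrapped environment plugs into the usual Gym loop (this assumes macad-gym is installed via `pip install macad-gym`, that importing it registers the sample environment IDs, and that a CARLA 0.9.x server is running; the `random_policy` helper and the per-actor dict convention shown in the comments are assumptions, not verified macad-gym API details):

```python
def random_policy(action_spaces):
    """Sample one action per actor from a dict of per-actor action spaces."""
    # Assumption: macad-gym environments are multi-agent, so observations,
    # rewards, dones and actions are all dicts keyed by actor ID
    # (RLlib-style multi-agent convention).
    return {actor_id: space.sample() for actor_id, space in action_spaces.items()}

def run_random_episode(max_steps=100):
    # Imported here so the sketch can be read without the packages installed;
    # requires `pip install macad-gym` and a reachable CARLA 0.9.x server.
    import gym
    import macad_gym  # noqa: F401  (assumed to register the sample env IDs)

    env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
    obs = env.reset()
    for _ in range(max_steps):
        actions = random_policy(env.action_space.spaces)
        obs, rewards, dones, info = env.step(actions)
        if dones.get("__all__"):  # assumed RLlib-style "episode over" flag
            obs = env.reset()
    env.close()

if __name__ == "__main__":
    run_random_episode()
```

Replacing `random_policy` with a learned policy (e.g. the book's A2C agent acting per actor) is the intended substitution point.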

ravishk1 commented 5 years ago

Hi @praveen-palanisamy, can we use this wrapper to run your A2C code for training on CARLA 0.9.x? Or do we have to write each reward function again to import the CARLA environment into the agent training? There seem to be only two options available:

```python
{'HeteNcomIndePOIntrxMATLS1B2C1PTWN3-v0': 'Heterogeneous, Non-communicating, '
                                          'Independent, Partially-Observable '
                                          'Intersection Multi-Agent scenario '
                                          'with Traffic-Light Signal, 1-Bike, '
                                          '2-Car, 1-Pedestrian in Town3, '
                                          'version 0',
 'HomoNcomIndePOIntrxMASS3CTWN3-v0': 'Homogenous, Non-communicating, '
                                     'Independed, Partially-Observable '
                                     'Intersection Multi-Agent scenario with '
                                     'Stop-Sign, 3 Cars in Town3, version 0'}
```

But if I want to replicate the CORL2017 paper, will it automatically take care of everything, or will we have to design the environment in macad-gym?

Thanks

praveen-palanisamy commented 5 years ago

Hey @ravishk1, you can run A2C/A3C agents (and other RL algorithms) on CARLA 0.9.x using the wrapper code in this repository: https://github.com/praveen-palanisamy/macad-gym

Those two are sample environments that support multi-agent RL in CARLA 0.9.x. You can create new (single- or multi-agent) environments just by changing the (JSON-like) env definition file to place the cars and other actors in the scenarios of your choice. MACAD-Gym already implements the reward function used in the CORL2017 CARLA paper, so you can use it directly.
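To make the "change the env definition" idea concrete, here is a minimal sketch of what such a JSON-like scenario description could look like, written as a Python dict. Every field name and coordinate below is invented for illustration; the actual schema lives in macad-gym's sample configs, which should be consulted for the real keys:

```python
# Hypothetical shape of a JSON-like scenario definition: pick a town and
# describe each actor to place in it. All keys and values are illustrative.
my_scenario = {
    "map": "Town03",
    "actors": {
        "car1": {"type": "vehicle", "start": [170, 80], "end": [144, 136]},
        "car2": {"type": "vehicle", "start": [188, 59], "end": [167, 75]},
        "pedestrian1": {"type": "pedestrian", "start": [158, 107], "end": [161, 130]},
    },
}
```

The point is that adding or removing an actor entry changes the scenario without touching any reward or wrapper code.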

If you have further questions about MACAD-Gym, please open an issue on that repository so that things stay better organized.