kochlisGit / Autonomous-Vehicles-Adaptive-Cruise-Control

An implementation of an Autonomous Vehicle Agent in CARLA simulator, using TF-Agents

TypeError: No to_python (by-value) converter found for C++ type: class std::vector<unsigned char,class std::allocator<unsigned char> > #3

Open RimelMj opened 3 years ago

RimelMj commented 3 years ago

Hi, thank you for sharing your work! It's been really helpful. Unfortunately, I get this error when I run `python straight_lane_agent_c51_training.py`:

        lambda sensor_data: self._sensor_callback(SensorType.COLLISION_DETECTOR, sensor_data)
      File "D:\Carla9.9\WindowsNoEditor\PythonAPI\code\agent\simulation\simulation.py", line 140, in _sensor_callback
        data = sensor_data.other_actor.semantic_tags
    TypeError: No to_python (by-value) converter found for C++ type: class std::vector<unsigned char,class std::allocator<unsigned char> >

Can you please help me? Thank you.

kochlisGit commented 3 years ago

Hello @RimelMj

This is a weird error. Maybe it's the Carla version that you are using... Perhaps if you download the latest version of Carla (0.9.11), which is the version I wrote the scripts with, the error will be fixed.

RimelMj commented 3 years ago

I am using Carla 0.9.9.4, I'll try the latest one, thank you!

kochlisGit commented 3 years ago

Yes, please, and tell me if this solution works for you.

RimelMj commented 3 years ago

It worked fine, thank you for your help!

RimelMj commented 2 years ago

Hi again! I have another question. Is it normal that I've been training this agent for more than two weeks (the simulator sometimes freezes, so I restart training from the latest checkpoints) and I still get negative average returns? Thank you!

kochlisGit commented 2 years ago

Hello, no, it's fine! I have been doing this myself, because it's too hard for the GPU to both render the environment and train the agent. However, if you want to solve this problem, you will have to run the Carla simulator in Docker (on Linux or WSL) and disable the simulation rendering, as described in the documentation here:

https://carla.readthedocs.io/en/latest/build_docker/
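For reference, rendering can also be disabled directly from the Python API once the server is running. This is a minimal sketch, not taken from this repo; the host and port are the CARLA defaults and are assumptions:

```python
import carla

# Connect to a running CARLA server (default host/port assumed).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Disable world rendering so the GPU only simulates, without drawing anything.
# Note: cameras and other GPU-based sensors return empty data in this mode.
settings = world.get_settings()
settings.no_rendering_mode = True
world.apply_settings(settings)
```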

Another solution is to store the replay buffer on disk alongside the agent's policy. When you start the simulation again, the agent resumes training from the point where it crashed. It is demonstrated here:

https://github.com/tensorflow/agents/blob/master/docs/tutorials/10_checkpointer_policysaver_tutorial.ipynb
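As a rough sketch of what that tutorial does (the names `agent`, `replay_buffer` and `global_step` are placeholders for whatever the training script already defines):

```python
import os
import tensorflow as tf
from tf_agents.utils import common
from tf_agents.policies import policy_saver

checkpoint_dir = os.path.join('checkpoints', 'c51')

# Checkpoint the agent, its policy, the replay buffer and the step counter,
# so a crashed or frozen run can be resumed instead of restarted from scratch.
train_checkpointer = common.Checkpointer(
    ckpt_dir=checkpoint_dir,
    max_to_keep=1,
    agent=agent,
    policy=agent.policy,
    replay_buffer=replay_buffer,
    global_step=global_step)

# Restores the latest checkpoint if one exists, otherwise starts fresh.
train_checkpointer.initialize_or_restore()

# ... training loop ...

# Save periodically; PolicySaver additionally exports the policy as a SavedModel.
train_checkpointer.save(global_step)
policy_saver.PolicySaver(agent.policy).save(os.path.join('policies', 'c51_policy'))
```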

RimelMj commented 2 years ago

Thank you for your solutions :) May I ask how long it took you to reach convergence?

kochlisGit commented 2 years ago

The C51 agent performs complicated computations, which means it takes a lot of time. More than 250,000 steps are required. If you have a fast GPU, that won't be a problem. If you can't wait that long, then you will have to disable the rendering as I mentioned above.

Another thing you might want to try is using a different agent. In my experience, the PPOAgent trains quite a bit faster than this one and can achieve impressive results as quickly as C51. I have made an example of how to use this agent here:

https://github.com/kochlisGit/DRL-Frameworks/blob/main/tf-agents/ppo_train.py
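Setting that script aside, a bare-bones PPO setup in TF-Agents looks roughly like this (a sketch only; `train_env` is assumed to be a `TFPyEnvironment` wrapping the driving environment, and the layer sizes are illustrative):

```python
import tensorflow as tf
from tf_agents.agents.ppo import ppo_clip_agent
from tf_agents.networks import actor_distribution_network, value_network

# The actor network outputs an action distribution; the value network estimates returns.
actor_net = actor_distribution_network.ActorDistributionNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=(128, 64))
value_net = value_network.ValueNetwork(
    train_env.observation_spec(),
    fc_layer_params=(128, 64))

agent = ppo_clip_agent.PPOClipAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    actor_net=actor_net,
    value_net=value_net,
    num_epochs=10,  # gradient epochs per batch of collected trajectories
    train_step_counter=tf.Variable(0, dtype=tf.int64))
agent.initialize()

# Training then follows the usual TF-Agents pattern: collect full episodes with
# agent.collect_policy and call agent.train(experience) on the trajectories.
```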

I haven't tested it on autonomous driving yet, but on OpenAI Gym the PPOAgent sometimes achieves the same results as C51, only faster.

RimelMj commented 2 years ago

Well, I am working on Windows and I'm using a CPU, so I guess that's why the training is so slow.

kochlisGit commented 2 years ago

Well, unless you are willing to wait 1-2 weeks, you will have to use one of the above solutions. You are training a vehicle here...

RimelMj commented 2 years ago

Yes, I'll use the checkpoint policy saver! Thank you!