Open lyonsracing opened 2 years ago
Hi, sorry for the late response.
What issues are you facing, and what would you like to research?
I anticipate that the agent is specifically designed to work only with the CARLA environment provided in the code. So if you're looking for extensions, you'll probably have to work on the environment side too.
No worries, thank you for taking the time to respond.
A short background: my research entails implementing an RL agent trained in the CARLA environment. My intent is to extract your trained agent (curriculum learning combined with PPO) and use it in the basic CARLA environment, i.e., have the agent drive around the environment while I gather data. My main question is: is it possible to extract the agent and use it in the CARLA environment? If so, how might I extract the agent you trained?
I initially tried to use the PPO agent from your code in my CARLA environment. Upon not being able to do so, I decided that I may need to train the agent before being able to use it. I have been able to run the training via the main.py script. I noticed that the data was being saved into the logs; however, I did not notice any weights being saved during training that would update the policy as the agent trained. At this point, I am still looking for ways to either train the agent and use it as intended, or extract the agent you trained and use it in my environment.
Any advice you have for me would be greatly appreciated. Thank you.
ok, so:

1. The environment can save experience `traces`. It should also be possible to collect data while learning: see here.
2. `learning.state_sX(...)` (as shown here) by default saves the agent's weights at each episode once completed, unless specified otherwise. You should look for a folder called `weights/stage-sX` or similar.
3. To reuse a trained agent, point it at the saved weights with `weights_dir=<your-path-to-weights>`, then call `agent.load()` (if you named the agent "agent"). I don't remember if there is an example about it, somewhere...
4. You may also want to set `time_horizon=1`. Moreover, you can further customize the envs in terms of sensors: e.g. you can have more RGB cameras, but also depth cameras, lidar, and so on.

Hope it helps a bit.
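To make the save/load flow above concrete, here is a minimal, self-contained sketch of the pattern being described: train, let weights land in a `weights/stage-sX`-style folder, then construct a fresh agent with `weights_dir` pointing there and call `load()`. Note that `StubAgent` and its `save`/`load` methods are stand-ins written for illustration only; they are not this repository's actual classes, whose real API may differ.

```python
import json
import os
import tempfile


class StubAgent:
    """Illustrative stand-in for an agent that persists weights to disk.

    Mirrors the pattern from the reply above: weights are written into a
    directory (e.g. weights/stage-sX) during training, and a later run
    restores them via weights_dir + load().
    """

    def __init__(self, weights_dir):
        self.weights_dir = weights_dir
        self.weights = None

    def save(self, weights):
        # Training side: write the policy weights into weights_dir.
        os.makedirs(self.weights_dir, exist_ok=True)
        with open(os.path.join(self.weights_dir, "policy.json"), "w") as f:
            json.dump(weights, f)

    def load(self):
        # Inference side: restore whatever was saved in weights_dir.
        with open(os.path.join(self.weights_dir, "policy.json")) as f:
            self.weights = json.load(f)
        return self.weights


with tempfile.TemporaryDirectory() as root:
    stage_dir = os.path.join(root, "weights", "stage-s1")

    # "Training" run saves its weights at episode end.
    trainer = StubAgent(weights_dir=stage_dir)
    trainer.save({"layer0": [0.1, 0.2]})

    # A separate run reconstructs the agent and loads the saved weights.
    agent = StubAgent(weights_dir=stage_dir)
    restored = agent.load()
    print(restored)  # the weights dict written by the training run
```

The point of the sketch is only the directory contract: as long as the evaluation run is configured with the same `weights_dir` the training run wrote to, loading should recover the trained policy.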
Thank you so much. This is a lot of information which helps a lot. I will look into all these key points moving forward. I appreciate your time and help!
Hello,
I was wondering if it's possible to implement the agent you have trained in another environment. I am looking to use a trained agent in CARLA, but I am having some issues utilizing the agent you trained. Any advice on how to utilize your trained agent for research?
Thank you