Hello, Mr. Everett.
I have two questions.
1 - How can I run 'run_trajectory_dataset_creator.sh' faster? I ran it with 4 agents and the pretrained 'CADRL' policy, but it only generates about 55 trajectories per minute, and I need at least three million. I tried commenting out line 164, "rewards = self._compute_rewards()", in "collision_avoidance_env.py" to speed things up, but that only got me to about 60 trajectories per minute, which is still very slow.
2 - How can I train "rl_collision_avoidance (GA3C-CADRL)" with a graphics card?
We didn't spend much time optimizing for computation speed. The CADRL policy in particular uses quite outdated custom NN implementations that likely contribute to slow runtime. If you have access to multiple machines or could run on a bunch of cloud/AWS machines in parallel, that would be one way to generate more samples faster.
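Even on a single multi-core machine, the same idea applies: trajectory generation is embarrassingly parallel, so you can run several independent workers and merge their results. A minimal sketch with Python's `multiprocessing`, where `generate_trajectories` is a hypothetical stand-in for the repo's per-episode rollout loop (not an actual function in the codebase):

```python
import multiprocessing as mp
import random

def generate_trajectories(n, seed):
    # Hypothetical stand-in for one worker's rollout loop: each worker
    # would run its own env instance with a distinct seed and return a
    # list of trajectories (here, dummy (seed, index, value) tuples).
    rng = random.Random(seed)
    return [(seed, i, rng.random()) for i in range(n)]

def generate_parallel(total, workers=4):
    # Split the workload evenly and give each worker a unique seed so
    # the generated trajectories are not duplicates of each other.
    per_worker = total // workers
    args = [(per_worker, seed) for seed in range(workers)]
    with mp.Pool(workers) as pool:
        chunks = pool.starmap(generate_trajectories, args)
    # Flatten the per-worker lists into one dataset.
    return [traj for chunk in chunks for traj in chunk]

if __name__ == "__main__":
    dataset = generate_parallel(total=400, workers=4)
    print(len(dataset))  # 400 trajectories collected across 4 processes
```

With roughly 60 trajectories per minute per process, 4 processes on separate cores should get you close to 4x the throughput, since the workers share no state.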
Changing this line to use your GPU should do it. In my experience, using the CPU actually gave a higher number of samples per second, maybe because there are no images or conv layers in the NN architecture. But I would be interested to hear about your experience with this.
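If the training script picks its device via the standard CUDA environment variable (an assumption; check how the repo's TensorFlow session is actually configured), one way to toggle between GPU and CPU without editing the model code is to set `CUDA_VISIBLE_DEVICES` before TensorFlow is imported. A small helper sketch:

```python
import os

def select_device(gpu_id=None):
    # Must run before any TensorFlow import, since TF reads this
    # variable at initialization time.
    # gpu_id=None hides all GPUs, forcing CPU execution;
    # gpu_id=0 exposes only the first GPU to the process.
    os.environ["CUDA_VISIBLE_DEVICES"] = "" if gpu_id is None else str(gpu_id)
    return os.environ["CUDA_VISIBLE_DEVICES"]

# Force CPU (the setting found faster here for this small MLP):
select_device(None)
# Or expose GPU 0 for training:
select_device(0)
```

This makes it easy to benchmark both settings and compare samples per second on your own hardware.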