Luca96 / carla-driving-rl-agent

Code for the paper "Reinforced Curriculum Learning for Autonomous Driving in CARLA" (ICIP 2021)
MIT License

FPS is too low #12

Open obadul024 opened 2 years ago

obadul024 commented 2 years ago

Hi there, I cloned your fantastic repo and started to run some experiments. There is an issue I am facing, however: the FPS is stuck at 2. No matter what I try, it simply cannot run any faster. I tried it in both an evaluation and a training experiment.

I can manage to run CARLA as a server at 60 FPS with no issues, but when I run the main script, it simply doesn't work.

I would love to have some pointers. Thanks for your help. Cheers

Luca96 commented 2 years ago

Hi, thanks for the compliment.

Yeah, I know FPS is an issue. With the hardware I had (i9 + RTX 6000), I was able to achieve about 10-15 FPS, which is still low (probably due to some poorly optimized code). Anyway, you could try these:

I guess that's all you can try; hope it helps a bit.
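One commonly documented knob, independent of this repo's code, is CARLA's own rendering quality setting. As a hedged sketch (the exact flag spelling depends on your CARLA version; check the rendering docs for yours):

```shell
# Launch the CARLA server with reduced rendering quality,
# which typically raises the achievable frame rate.
./CarlaUE4.sh -quality-level=Low
```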

obadul024 commented 2 years ago

Hi there,

Thank you so much for such a detailed explanation and helpful information and pointers. Literally am beaming right now. This is fantastic help. Thank you once again.

I shall try all of these solutions and let you know if I have any questions.

Thanks for your time and efforts. Appreciate it.

Obaid


adhocmaster commented 2 years ago

Hi Luca!

I am running the model on one GPU and CARLA on another. GPU utilization on both is under 6% and CPU under 60%. I have an RTX 2070 and an i7, and I am getting 1-3 FPS max.

I was wondering if it's possible to run the pygame client in another process and process the sensor data in parallel.

Luca96 commented 2 years ago

Hi, apologies for the late response.

In principle it should be possible, but in practice it wouldn't help much, since the agent makes sequential decisions: it first has to wait for the sensor data, which is then fed to the neural nets, which finally output the action for time t.
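To make the sequential dependency concrete, here is a minimal sketch (illustrative stand-ins, not the repo's actual code): action a_t cannot be computed before state s_t is available, and s_{t+1} cannot exist before a_t has been applied, so there is nothing left to run in parallel within one step.

```python
def policy(state):
    # stand-in for the neural network's forward pass (batch size 1)
    return -state  # hypothetical action

def env_step(state, action):
    # stand-in for ticking the simulator with the chosen action
    return state + action

state = 10
trace = []
for t in range(3):
    action = policy(state)           # must wait for s_t
    state = env_step(state, action)  # s_{t+1} depends on a_t
    trace.append(state)
```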

I have to check the code to see if it's possible to optimize the neural nets (e.g. by using more @tf.function), and look for possible bottlenecks that prevent full GPU utilization (probably due to data transfer between CPU and GPU, related to the memory buffer).

The fact is that RL is mainly sequential (at most you can run a bunch of environments in parallel): you run your environment/simulator for N steps, but at each step you use your NN to predict on a batch of size 1 (the state), so you can't fully leverage vectorization and data parallelism. The N experience tuples are then stored in a memory buffer, from which you later retrieve a batch of B examples to train and improve the NN. Basically, all of this repeats until convergence or until the maximum number of steps is exceeded.
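The loop described above can be sketched as follows. All names here are illustrative assumptions, not the repo's API: `predict` stands in for the batch-size-1 forward pass, `step` for one simulator tick, and the buffer/batch handling mirrors the collect-then-train structure.

```python
import random

N_STEPS = 8      # rollout length N
BATCH_SIZE = 4   # training batch size B

def predict(state):
    # NN forward pass on a batch of size 1 -- the per-step bottleneck
    return state % 2  # hypothetical discrete action

def step(state, action):
    # stand-in for one simulator tick: returns (next_state, reward)
    return state + 1, float(action)

buffer = []
state = 0
for _ in range(N_STEPS):                   # sequential rollout
    action = predict(state)
    next_state, reward = step(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

batch = random.sample(buffer, BATCH_SIZE)  # retrieve B examples
# a real agent would now run a gradient update on `batch`,
# then repeat the whole collect/train cycle until convergence
```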