System information

OS Platform and Distribution (e.g., Linux Ubuntu 18.04): Ubuntu 20.04
Apollo version (3.5, 5.0, 5.5, 6.0): 7.0

I am currently using Apollo 7.0.0's E2E mode, but I am experiencing an issue where the model inference time is excessively long, which prevents the planning module from functioning properly. I am using an RTX 3090 (24 GB) GPU, and I have confirmed that the car moves normally when using NO_LEARNING mode instead of E2E_TEST. The ADEBUG log is as follows:
I0413 09:36:50.617738 12897 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.617749 12897 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.633669 12895 planning_component.cc:76] [DEBUG] Received traffic light data: run traffic light callback.
I0413 09:36:50.697307 12887 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.697319 12887 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.777606 12893 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.777618 12893 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.818557 12894 trajectory_imitation_libtorch_inference.cc:277] [DEBUG] trajectory imitation model inference used time: 4422.78 ms.
This means that the following line of code takes more than 4 seconds to execute:
at::Tensor torch_outputtensor = model.forward(torch_inputs).toTensor().to(torch::kCPU);
I do not think the GPU itself is insufficient for running this model. How can I resolve this issue?