ApolloAuto / apollo

An open autonomous driving platform
Apache License 2.0

Model inference time of Apollo 7.0.0 E2E mode is too long #14882

Open mgcho0608 opened 1 year ago

mgcho0608 commented 1 year ago

System information

I am currently using Apollo 7.0.0's E2E mode, but the model inference time is excessively long, causing planning to fail. I am using an RTX 3090 24GB GPU, and I have confirmed that the car moves normally when using the NO_LEARNING mode instead of E2E_TEST. The ADEBUG log is as follows:

I0413 09:36:50.617738 12897 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.617749 12897 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.633669 12895 planning_component.cc:76] [DEBUG] Received traffic light data: run traffic light callback.
I0413 09:36:50.697307 12887 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.697319 12887 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.777606 12893 pnc_map.cc:501] [DEBUG] lanes:2
I0413 09:36:50.777618 12893 pnc_map.cc:557] [DEBUG] distance0.00429093
I0413 09:36:50.818557 12894 trajectory_imitation_libtorch_inference.cc:277] [DEBUG] trajectory imitation model inference used time: 4422.78 ms.

This means that the following line of code takes more than 4 seconds to execute:

at::Tensor torch_outputtensor = model.forward(torch_inputs).toTensor().to(torch::kCPU);

I do not think my GPU is insufficient for running this model. How can I resolve this issue?
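One thing worth checking before blaming the GPU: with TorchScript models, the first one or two `forward()` calls are often far slower than steady-state, because graph optimization and CUDA kernel setup happen lazily on first use, and a `.to(torch::kCPU)` copy also forces synchronization with all pending GPU work. A minimal, generic Python sketch of how one might measure steady-state latency with warm-up runs excluded (this is an illustration, not Apollo code; `model_forward` below is a hypothetical callable standing in for the libtorch call):

```python
import time

def time_inference_ms(fn, *args, warmup_runs=3, timed_runs=5):
    """Average wall-clock time of fn(*args) in milliseconds,
    after discarding warm-up invocations."""
    # Warm-up: the first calls into a JIT-compiled model can trigger
    # lazy graph optimization and kernel compilation, so they may be
    # orders of magnitude slower than steady-state calls.
    for _ in range(warmup_runs):
        fn(*args)
    start = time.perf_counter()
    for _ in range(timed_runs):
        fn(*args)
    return (time.perf_counter() - start) * 1000.0 / timed_runs

# Hypothetical usage:
#   latency_ms = time_inference_ms(model_forward, torch_inputs)
```

If only the first planning cycle shows the 4.4 s figure and later cycles are fast, the cost is warm-up rather than raw GPU throughput; if every cycle is slow, the problem lies elsewhere.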

AlexandrZabolotny commented 1 year ago

You can try the approach shown here.