hustvl / MapTR

[ICLR'23 Spotlight] MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction

Inference speed on RTX 2080Ti vs. RTX 3090 #34

Open wenjie710 opened 1 year ago

wenjie710 commented 1 year ago

Hi, this work is awesome, and thanks for sharing the source code. I find it confusing that we get a much higher inference speed on an RTX 2080Ti than the results reported for the RTX 3090. We ran benchmark.py with the provided checkpoints and got 14.2 fps (11.2 fps in the paper) with maptr-tiny and 45 fps (25.1 fps in the paper) with maptr-nano. Is something omitted, or was the speed in the paper measured without any fp16 operations during inference? Hope to hear from you soon.
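
For reference, a speed benchmark of this kind usually times only the forward pass after a warmup period and synchronizes the GPU around each call, so warmup counts and driver/CUDA versions can shift the measured fps noticeably. Below is a minimal sketch of such a loop in generic PyTorch; the function name, warmup count, and call signature are illustrative assumptions, not the actual benchmark.py:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, data_loader, num_warmup=50, num_iters=200):
    # Skip the first iterations so one-time costs (kernel selection,
    # memory allocation) do not distort the timing, then average the
    # pure-inference latency over num_iters samples.
    model.eval()
    elapsed = 0.0
    for i, data in enumerate(data_loader):
        torch.cuda.synchronize()
        start = time.perf_counter()
        model(**data)                    # forward pass only, no loss
        torch.cuda.synchronize()         # wait for all GPU work to finish
        if i >= num_warmup:
            elapsed += time.perf_counter() - start
        if i + 1 == num_warmup + num_iters:
            break
    return num_iters / elapsed
```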

LegendBC commented 1 year ago

Thanks for the kind reminder @wenjie710! We reran the speed benchmark script on a clean RTX 3090: MapTR-nano runs at 48.2 fps and MapTR-tiny at 18.4 fps. The latency results in our paper were measured with our initial code, and that benchmarking environment was not clean.

wenjie710 commented 1 year ago

Thanks for the response. The performance boost is huge, so I am curious: what is the primary difference between the initial code and the current one? Is it related to the fp16 operation?

LegendBC commented 1 year ago

It's not related to the fp16 operation. The gap should be attributed to the original unclean code and the unclean benchmarking environment.
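
(For context, an fp16 inference path would typically mean wrapping the forward pass in PyTorch's autocast. The snippet below is a generic sketch of that idea, not MapTR code:)

```python
import torch

@torch.no_grad()
def fp16_forward(model, data):
    # Run the forward pass in mixed precision: matmuls/convolutions
    # execute in float16 while numerically sensitive ops stay in float32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        return model(**data)
```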

ibanana97 commented 11 months ago

Does benchmark.py need to run on 8 GPUs in distributed mode?