tusen-ai / SST

Code for a series of work in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).
Apache License 2.0

Speed up the inference process of FSD++ #159

Closed eagle-chase closed 1 year ago

eagle-chase commented 1 year ago

I trained a 200 m range FSD++ model (6 frames) on my own dataset. Is the standard configuration for FSD++ inference a single GPU with a batch size of 1? The inference speed is only about 1 fps on an RTX 4090 (including data loading). Because of the cache queue, I don't know how to increase the batch size or use multiple GPUs. Do I need to use a special sampler when reading data from the data loader, or use multiple cache queues? Can I get some suggestions? I want to increase the inference speed.
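
For reference, a minimal sketch of the "multiple cache queues" idea: keeping one queue per sequence would let a batch mix frames from different sequences, as long as each sequence's own frames still arrive in temporal order. The class and names below are hypothetical illustrations for this issue, not the repository's actual API.

```python
# Hypothetical sketch: one cache queue per sequence, so frames from different
# sequences can be batched while each sequence stays in temporal order
# (which is what an FSD++-style temporal cache requires).
from collections import defaultdict, deque


class PerSequenceCache:
    """Keeps a separate bounded feature queue for every sequence id."""

    def __init__(self, max_len=6):
        # e.g. 6 cached frames, matching a 6-frame model
        self.queues = defaultdict(lambda: deque(maxlen=max_len))

    def update(self, seq_id, frame_feats):
        # Append the newest frame; the deque drops the oldest automatically.
        self.queues[seq_id].append(frame_feats)

    def get(self, seq_id):
        # Cached frames for this sequence, oldest -> newest.
        return list(self.queues[seq_id])


# Usage: a batch may mix sequences, provided frames of the same sequence
# arrive in order (e.g. via a sampler that interleaves sequences).
cache = PerSequenceCache(max_len=6)
for seq_id, feats in [("seq_a", "f0"), ("seq_b", "f0"), ("seq_a", "f1")]:
    cache.update(seq_id, feats)
print(cache.get("seq_a"))  # ['f0', 'f1']
```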

eagle-chase commented 1 year ago

I use the two-stage model with the same config as on Waymo. Is this kind of inference speed normal?

Abyssaledge commented 1 year ago

I think 1 fps is too slow. Is this the speed of the 6-frame FSD model or the 6-frame FSD++ model? Do you get a normal speed on the Waymo dataset?
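
Since the reported ~1 fps includes data loading, a rough profiling loop like the one below can help separate loader time from model time; the model and dataset here are placeholder stand-ins, not the actual FSD++ pipeline.

```python
# Hedged, self-contained sketch: time data loading and model forward separately.
# nn.Identity and random tensors stand in for the real detector and point clouds.
import time

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Identity().to(device)                 # placeholder for the detector
dataset = TensorDataset(torch.randn(100, 1024))  # placeholder data
loader = DataLoader(dataset, batch_size=1, num_workers=0)  # raise num_workers as needed

load_t, fwd_t, t0 = 0.0, 0.0, time.perf_counter()
with torch.no_grad():
    for (batch,) in loader:
        t1 = time.perf_counter()
        load_t += t1 - t0                        # time spent waiting on the loader
        batch = batch.to(device, non_blocking=True)
        _ = model(batch)
        if device == "cuda":
            torch.cuda.synchronize()             # make GPU timing accurate
        t0 = time.perf_counter()
        fwd_t += t0 - t1                         # transfer + forward time
print(f"data loading: {load_t:.2f}s, model forward: {fwd_t:.2f}s")
```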

eagle-chase commented 1 year ago

It is normal now, thank you. @Abyssaledge