Closed: abhigoku10 closed this issue 4 years ago
@skyhehe123
@abhigoku10 The model currently only supports GPU inference due to spconv's compatibility. The inference speed is 25 FPS on a 2080 Ti, measured with batch_size=1. The elapsed time is measured from loading the point cloud from the .bin file to producing the bounding boxes.
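For reference, here is a minimal sketch of what such an end-to-end measurement could look like, assuming a PyTorch model that can be called on a point-cloud tensor. `load_kitti_bin`, `measure_fps`, and the direct `model(tensor)` call are hypothetical stand-ins for the repo's actual loading, voxelization, and detection code:

```python
import time

import numpy as np
import torch


def load_kitti_bin(path):
    # Hypothetical loader: KITTI .bin files hold float32 (x, y, z, intensity) rows.
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)


@torch.no_grad()
def measure_fps(model, bin_paths, device="cuda"):
    model.eval().to(device)
    total = 0.0
    for path in bin_paths:
        torch.cuda.synchronize()      # finish any pending GPU work first
        start = time.time()           # clock starts at loading the .bin file
        points = load_kitti_bin(path)
        tensor = torch.from_numpy(points).to(device)
        boxes = model(tensor)         # stand-in for the actual forward pass
        torch.cuda.synchronize()      # wait until the boxes actually exist
        total += time.time() - start  # clock stops once boxes are produced
    fps = len(bin_paths) / total
    print(f"{total / len(bin_paths) * 1000:.1f} ms/frame ({fps:.1f} FPS)")
```

The `torch.cuda.synchronize()` calls matter: CUDA kernels launch asynchronously, so without them the wall-clock timing would stop before the GPU has finished producing the boxes.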
@skyhehe123 I have a few queries on the inference:
Q1. In the paper you mention an inference time of 25 FPS. Can you let me know how you calculated the inference time? Is it by keeping the batch size high over a given set of frames?
Q2. Can we run the current inference code on CPU? What is the expected drop in inference speed?
Thanks in advance