Open frangkyy opened 2 years ago
I have the same concern.
Thanks.
We follow the recent advances (e.g., FasterSeg, DF1-Seg, BiSeNetV2, and STDC) and utilize "1080Ti+TensorRT" to measure the inference speeds of LPS-Net-S/M/L on the Cityscapes (Table 5), CamVid (Table 6) and BDD100K (Table 7) datasets. The data precision is FP32 (as stated in "Measure the Latency" of this repository and Section 4.2 of the paper).
DDRNet measures its inference speeds on a 2080Ti, which is more advanced than a 1080Ti. To address your concern, we additionally evaluated the inference speed of DDRNet on the Cityscapes dataset with 1080Ti+TensorRT. DDRNet-23-slim achieves 115.2FPS (1080Ti+TensorRT), which is slightly faster than its reported 101.6FPS (2080Ti+PyTorch), but still slower than LPS-Net-L (151.8FPS).
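For reference, a latency benchmark of this kind typically follows a warmup-then-average protocol so that one-time costs (kernel compilation, memory allocation) are excluded. Below is a minimal sketch of that protocol; the function name, iteration counts, and dummy workload are illustrative and not taken from this repository, and on a real GPU one would additionally synchronize the device (e.g., `torch.cuda.synchronize()` or a TensorRT stream sync) before reading the clock.

```python
import time

def measure_latency(infer, warmup=10, iters=100):
    """Return average per-inference latency in milliseconds.

    `infer` is any zero-argument callable standing in for one
    forward pass. Warmup iterations are run first and discarded.
    """
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(iters):
        infer()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0  # ms per inference

# Dummy CPU workload standing in for the network forward pass
latency_ms = measure_latency(lambda: sum(i * i for i in range(1000)))
fps = 1000.0 / latency_ms
```

FPS numbers such as those quoted above are simply the reciprocal of this averaged per-image latency.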
Thanks for your reply!
The results in this paper are compared on 1080Ti+TensorRT, and it is not stated what precision TensorRT uses, whereas DDRNet and others run on PyTorch, so they cannot be directly compared.