xzz777 / SCTNet

Official implementation of SCTNet (AAAI2024)

[Q] Did you also evaluate models with TensorRT (FP16)? #26

Open yellofi opened 3 months ago

yellofi commented 3 months ago

First of all, I appreciate your amazing work.

In your paper, the inference time measured with TensorRT is much faster than with PyTorch. I wonder whether you evaluated the models' accuracy with both PyTorch and TensorRT (FP16), or only with PyTorch.

Could accuracy degradation occur with TensorRT (FP16)?

xzz777 commented 2 months ago

Hello, I have evaluated our method with TensorRT (FP16), and the mIoU values are consistent with those measured in PyTorch, without any degradation. Unfortunately, I don't have the exact numbers at hand right now, but I can confirm that on Cityscapes the TRT-FP16 accuracy matches the Torch-FP32 accuracy reported in the paper. The TRT-INT8 accuracy is approximately 0.7 to 1.0 mIoU lower than Torch-FP32.
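
For anyone who wants to reproduce such a comparison themselves, below is a minimal sketch (not the authors' script) of the usual PyTorch → ONNX → TensorRT FP16 path. The tiny model, file names, and input resolution are placeholder assumptions; swap in the actual SCTNet model and trained weights.

```python
# Minimal sketch of exporting a segmentation model to ONNX and building a
# TensorRT FP16 engine for an accuracy comparison. The model below is only a
# stand-in for SCTNet; paths and names are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in segmentation network
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 19, 1),               # 19 Cityscapes classes
).eval()

dummy = torch.randn(1, 3, 1024, 2048)   # Cityscapes-sized input

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=13,
)

# Build an FP16 engine with trtexec (shipped with TensorRT), e.g.:
#   trtexec --onnx=model.onnx --fp16 --saveEngine=model_fp16.engine
# Then run the Cityscapes val set through both the PyTorch model and the
# engine and compare mIoU to check for FP16 degradation.
```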