junjiehe96 / FastInst

[CVPR2023] FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation
MIT License

About FPS #8

Closed · zhouyiks closed 1 year ago

zhouyiks commented 1 year ago

Hi, I tried to reproduce the results of FastInst on COCO, focusing on FPS (using FastInst-Res50-D3 in --eval-only mode with --num-gpus 1).

  1. On both a server with two 3090 GPUs and a server with two A6000 GPUs, the reported pure inference time was ~0.022 s (about 45 FPS), which is much higher than the FPS reported in the paper and on the FastInst GitHub main page. Is there any post-processing that is not counted in the 'pure inference time'? If so, could you explain how to measure the full inference time? Or is the gap simply due to differences in GPU/CPU/CUDA/PyTorch versions? Looking forward to your response. Thank you.
junjiehe96 commented 1 year ago

Thank you for your interest in our work. FPS depends on the device you are using; in the paper we measured it on a V100 (16GB) GPU with a batch size of 1.
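
For anyone benchmarking on their own hardware, here is a minimal timing sketch assuming a standard detectron2-style setup. The config path, weights path, sample image, and the `add_fastinst_config` import are placeholders, not the repository's official benchmark; adjust them to match your checkout.

```python
# Minimal timing sketch (not the repository's official benchmark script):
# measure per-image inference latency at batch size 1, the setting used for
# the paper's FPS numbers. Paths and the FastInst config import below are
# hypothetical placeholders; adjust them to your checkout.
import time

import torch
from detectron2.config import get_cfg
from detectron2.data.detection_utils import read_image
from detectron2.engine import DefaultPredictor

from fastinst import add_fastinst_config  # assumed import path

cfg = get_cfg()
add_fastinst_config(cfg)
cfg.merge_from_file("configs/fastinst_R50-D3.yaml")  # placeholder config path
cfg.MODEL.WEIGHTS = "fastinst_r50_d3.pth"            # placeholder weights path
cfg.freeze()

predictor = DefaultPredictor(cfg)
image = read_image("sample.jpg", format="BGR")       # any COCO image

# Warm up so one-time CUDA initialization doesn't skew the measurement.
for _ in range(10):
    predictor(image)

torch.cuda.synchronize()
start = time.perf_counter()
n_runs = 100
for _ in range(n_runs):
    predictor(image)  # includes resizing and mask post-processing
torch.cuda.synchronize()
elapsed = time.perf_counter() - start
print(f"avg latency: {elapsed / n_runs * 1000:.1f} ms "
      f"({n_runs / elapsed:.1f} FPS)")
```

For what it's worth, the pure inference time that detectron2 prints during --eval-only excludes data loading but includes the model's forward pass and its post-processing, so differences in GPU, CUDA, and PyTorch versions are the most likely source of the gap.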