Hi, I tried to reproduce the results of FastInst on COCO, mainly the FPS (using FastInst-Res50-D3 with --eval-only and --num-gpus 1).
On both a server with two 3090 GPUs and a server with two A6000 GPUs, I measured a pure inference time of ~0.022s (about 45 FPS), which
is much higher than the FPS reported in the paper and on the FastInst GitHub main page. Is there any post-processing step that is not counted
in the 'pure inference time'? If so, could you please guide me on how to measure the full inference time? Or is the gap simply caused by
GPU/CPU/CUDA/PyTorch/... differences?
Looking forward to your response. Thank you.
Thank you for your interest in our work. FPS depends on the device you are using; in the paper we test it on a V100 (16GB) GPU with a batch size of 1.
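For anyone comparing numbers across devices, here is a minimal sketch of how end-to-end inference time at batch size 1 could be measured. The `dummy_model` and `measure_fps` names are hypothetical, not part of the FastInst codebase; with a real GPU model you would also call `torch.cuda.synchronize()` before reading the clock so queued CUDA kernels are included in the timing.

```python
import time

def dummy_model(image):
    # Hypothetical stand-in for a model's forward pass + post-processing.
    return [pixel * 2 for pixel in image]

def measure_fps(model, image, warmup=5, runs=50):
    """Time repeated batch-size-1 inference and return frames per second.

    With a CUDA model, insert torch.cuda.synchronize() before each
    time.perf_counter() call, otherwise asynchronous kernel launches
    make the measured time appear shorter than it really is.
    """
    for _ in range(warmup):
        model(image)  # warm-up iterations are excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        model(image)
    elapsed = time.perf_counter() - start
    return runs / elapsed

fps = measure_fps(dummy_model, [1, 2, 3])
print(fps > 0)
```

Differences in GPU, CPU, CUDA, and PyTorch versions can easily move this number by 2x or more, which is why papers fix a single reference device.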