Closed emjay73 closed 3 years ago
Hi, the pure one is used for estimation with batch size 1.
That's right. I forgot to change the batch size to 1.
I modified IMS_PER_BATCH in the YAML file from 16 to 1, but the numbers I got were nearly the same: changing the batch size didn't make much difference.
Am I doing something wrong?
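For what it's worth, if this is detectron2's config system (an assumption about your setup), IMS_PER_BATCH lives under SOLVER and controls the training batch; the test dataloader defaults to one image per GPU regardless, which could explain why changing it made no visible difference. Overrides can also be passed as dotted keys instead of editing the YAML; here is a minimal stand-in sketch for what such an override does (the helper name `merge_override` is made up for illustration):

```python
# Minimal stand-in for a dotted-key config override (what detectron2's
# cfg.merge_from_list achieves): set a nested key like
# 'SOLVER.IMS_PER_BATCH' without editing the YAML file on disk.

def merge_override(cfg: dict, key: str, value):
    """Set a dotted key such as 'SOLVER.IMS_PER_BATCH' in a nested dict."""
    node = cfg
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return cfg

cfg = {"SOLVER": {"IMS_PER_BATCH": 16}}
merge_override(cfg, "SOLVER.IMS_PER_BATCH", 1)
print(cfg["SOLVER"]["IMS_PER_BATCH"])  # → 1
```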
Also, the 'pure time' works out to approximately 32.61 fps, which is considerably faster than the number reported in the table (26.1 fps).
Can you guess the reason why?
It's a bit strange because I'm using a 2080 Ti and didn't expect it to be faster than a V100.
FYI, a 2080 Ti vs. V100 benchmark: https://lambdalabs.com/blog/best-gpu-tensorflow-2080-ti-vs-v100-vs-titan-v-vs-1080-ti-benchmark/
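Converting both throughput figures to per-frame latency makes the gap concrete (plain arithmetic, nothing assumed beyond the two numbers above):

```python
# 32.61 fps (measured) vs 26.1 fps (reported in the table),
# expressed as milliseconds per frame.
for fps in (32.61, 26.1):
    print(f"{fps:.2f} fps = {1000.0 / fps:.1f} ms per frame")
```

So the measured run spends roughly 30.7 ms per frame against the table's roughly 38.3 ms, about a 25% difference.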
Hi, that's interesting! It seems the model could have a better inference time than reported. The difference could be attributed to the Detectron2 version (v0.3 in our tests) or to different machine settings (CPU, memory, etc.). I'll check it later and update the table if inference is indeed faster. Thanks!
OK, I'm looking forward to it. Feel free to close the issue.
Hi,
I was just wondering how you measured the runtime of PanopticFCN.
I executed the evaluation code as follows,
and got two lines of messages from the terminal.
Which figure did you use for the fps estimation?
The pure one (excluding dataloader time and warmup frame time), or the other one (including them)?
BTW, I'm using a 2080 Ti.
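For reference, the two figures differ in what they count: the total one includes dataloader time and the warmup frames, while the pure one strips both, similar in spirit to what detectron2's `inference_on_dataset` reports. A small sketch with made-up per-frame timings (not real measurements from this model):

```python
# Each record is (data_loading_s, inference_s) for one frame.
# The first frame is treated as warmup: the first forward pass is
# typically much slower (CUDA context creation, cudnn autotune, etc.).
records = [
    (0.020, 0.150),  # warmup frame
    (0.015, 0.050),
    (0.014, 0.050),
    (0.016, 0.050),
]
num_warmup = 1

# "Total" fps: every frame, dataloader time included.
total_time = sum(d + i for d, i in records)
total_fps = len(records) / total_time

# "Pure" fps: inference time only, warmup frames excluded.
pure = [i for _, i in records[num_warmup:]]
pure_fps = len(pure) / sum(pure)

print(f"total: {total_fps:.1f} fps, pure: {pure_fps:.1f} fps")
```

With these toy numbers the pure figure is almost twice the total one, which is why the two terminal lines can disagree so much.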