lartpang / MINet

CVPR2020, Multi-scale Interactive Network for Salient Object Detection
https://openaccess.thecvf.com/content_CVPR_2020/html/Pang_Multi-Scale_Interactive_Network_for_Salient_Object_Detection_CVPR_2020_paper.html
MIT License

how do you calculate the inference time #5

Closed sjf18 closed 4 years ago

sjf18 commented 4 years ago

Hi, thanks for your great work. I'm curious how you calculate your inference time: when I run a single image on a Tesla V100 using your MINet demo, it's far from reaching 86 FPS. I also calculated MINet-Res50's FLOPs and parameter count, 162.38G and 87.06M; they are so large, how can the model run as fast as reported in your paper?

lartpang commented 4 years ago

As we mentioned in the paper, this is the forward inference time. It is obtained by reading images one by one from a dataset (ECSSD), accumulating only the forward-pass time (i.e. only `output_tensor = model(input_tensor)`), and then averaging over the dataset.

My device is a 1080 Ti, and the input is a 320×320 RGB image.
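The measurement protocol described above might be sketched as follows; `average_forward_time`, the stand-in model, and the fake dataset are all placeholders for illustration, not the repository's actual `cal_fps.py`:

```python
import time

def average_forward_time(model, inputs):
    """Average wall-clock time of model(x) over a dataset.

    Only the forward call itself is timed; data loading and
    post-processing are excluded, matching the protocol above.
    """
    total = 0.0
    for x in inputs:
        start = time.perf_counter()
        model(x)  # output_tensor = model(input_tensor)
        total += time.perf_counter() - start
    return total / len(inputs)

# Stand-ins for a network and a dataset of preprocessed images.
fake_model = lambda x: sum(x)
fake_dataset = [[1.0] * 10 for _ in range(100)]
fps = 1.0 / average_forward_time(fake_model, fake_dataset)
```

FPS is then simply the reciprocal of the averaged per-image forward time.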

lartpang commented 4 years ago

@sjf18 Thank you for your attention. I have provided the code I used to test FPS; feel free to point out any problems: https://github.com/lartpang/MINet/blob/master/code/utils/cal_fps.py

As for the large parameter count of MINet_Res50 that you mentioned, I suggest you try the channel-compressed version (https://github.com/lartpang/MINet/blob/master/code/module/MyLightModule.py), which seems to have little impact on performance. You can also use a larger batch size to improve performance.

sjf18 commented 4 years ago

@lartpang It was my misunderstanding, thank you!

sjf18 commented 4 years ago

> @sjf18 Thank you for your attention. I have provided the code I used to test FPS; feel free to point out any problems: https://github.com/lartpang/MINet/blob/master/code/utils/cal_fps.py
>
> As for the large parameter count of MINet_Res50 that you mentioned, I suggest you try the channel-compressed version (https://github.com/lartpang/MINet/blob/master/code/module/MyLightModule.py), which seems to have little impact on performance. You can also use a larger batch size to improve performance.

I have read your code: in PyTorch, if you want to measure time on CUDA, you need a `torch.cuda.synchronize()` call.
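The reason is that CUDA kernel launches return before the GPU work actually finishes, so a timer stopped without synchronizing reads too early. A minimal sketch of a corrected timing loop, with the sync passed as an optional hook (`timed_forward` and the stand-in model are illustrative names, not the repository's code; on GPU you would pass `sync=torch.cuda.synchronize`):

```python
import time

def timed_forward(model, x, n_warmup=5, n_iters=50, sync=None):
    """Average forward time with an optional device-sync hook.

    A few warm-up iterations amortize one-time setup costs; the
    sync hook blocks until all queued device work has completed,
    so the timer measures the real forward-pass duration.
    """
    for _ in range(n_warmup):
        model(x)
    if sync:
        sync()  # drain pending work before starting the clock
    start = time.perf_counter()
    for _ in range(n_iters):
        model(x)
        if sync:
            sync()  # wait for this forward pass to finish
    return (time.perf_counter() - start) / n_iters

# CPU stand-in for a model: no sync hook needed here.
t = timed_forward(lambda x: [v * 2 for v in x], [1.0] * 8)
```

Without the sync hook on GPU, the measured time reflects only kernel-launch overhead, which is why the reported FPS comes out far too high.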

lartpang commented 4 years ago

@sjf18 I have fixed the FPS-testing code in the current commit.

Thank you for pointing out the mistake; it will be corrected in a later version of the paper. The current speed of MINet_VGG16 is ~35 FPS.