pp00704831 / Stripformer-ECCV-2022-


About inference time #6

Closed c-yn closed 1 year ago

c-yn commented 1 year ago

Dear author, I find that the inference speed reported in your paper is very fast. Do you think the testing time should be computed with torch.cuda.synchronize(), as mentioned in the repository of MIMO-UNet? After adding this command, the measured speed decreases dramatically. Looking forward to your opinion. Thanks.
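
For reference, a minimal sketch of the difference between asynchronous and synchronized timing in PyTorch (the model and input below are just placeholders, not the Stripformer setup):

```python
import time
import torch

# Placeholder model and input; any CUDA model/tensor would show the same effect.
model = torch.nn.Conv2d(3, 64, 3, padding=1).cuda().eval()
x = torch.randn(1, 3, 256, 256, device='cuda')

with torch.no_grad():
    # Asynchronous timing: the clock stops as soon as the kernels are enqueued,
    # not when the GPU actually finishes computing.
    start = time.time()
    _ = model(x)
    async_ms = (time.time() - start) * 1000

    # Synchronous timing: torch.cuda.synchronize() waits for the GPU to finish
    # before the clock stops, so the measured time is much larger.
    torch.cuda.synchronize()
    start = time.time()
    _ = model(x)
    torch.cuda.synchronize()
    sync_ms = (time.time() - start) * 1000

print(f'Async: {async_ms:.2f} ms, Sync: {sync_ms:.2f} ms')
```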

pp00704831 commented 1 year ago

Hello,

As mentioned in the repository of MIMO-UNet, most image deblurring papers compare asynchronous time, so we also report asynchronous time.

Using a single RTX 3090 GPU, I compared Stripformer with SOTA methods on the GoPro test set. Stripformer is still faster than these methods in synchronous time (see the timing sketch after the table) and achieves a good balance among Params, GFLOPs, and time. The GFLOPs are measured on a 256x256 patch.

| Methods | PSNR | Params (M) | GFLOPs | Async-Time (ms) | Sync-Time (ms) |
| --- | --- | --- | --- | --- | --- |
| MPRNet (CVPR 2021) | 32.66 | 20 | 760 | 148 | 1410 |
| MIMO-UNet++ (ICCV 2021) | 32.68 | 16 | 616 | 49 | 1115 |
| Restormer (CVPR 2022) | 32.92 | 26 | 140 | 350 | 1116 |
| Stripformer | 33.08 | 20 | 170 | 52 | 800 |
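
For completeness, a rough sketch of synchronous timing with warm-up and averaging (this is an illustrative example, not our exact measurement script; `model`, `x`, `warmup`, and `runs` are placeholder names):

```python
import time
import torch

@torch.no_grad()
def benchmark_sync(model, x, warmup=10, runs=100):
    """Average synchronous inference time in ms over `runs` forward passes."""
    model.eval()
    for _ in range(warmup):           # warm-up to exclude CUDA init / cuDNN autotuning
        _ = model(x)
    torch.cuda.synchronize()          # make sure warm-up work is done before timing
    start = time.time()
    for _ in range(runs):
        _ = model(x)
    torch.cuda.synchronize()          # wait for all kernels before stopping the clock
    return (time.time() - start) * 1000 / runs
```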
c-yn commented 1 year ago

Thank you very much for the additional evaluation.