HanzhouLiu / DeblurDiNAT

Official implementation of the paper "DeblurDiNAT: A Lightweight and Effective Transformer for Image Deblurring".
https://arxiv.org/abs/2403.13163

inf_time #7

Open txy00001 opened 3 days ago

txy00001 commented 3 days ago

Thank you for open-sourcing such a good deblurring framework. I would like to ask: is there a metric for the model's inference time? Is there an indicator of how fast inference runs on a generic video or image? Also, is there a script for running inference on general or everyday scenes?

HanzhouLiu commented 2 days ago

As far as I know, the inference time should not include CPU/GPU loading time, and a warm-up pass is needed before timing begins. I am not aware of a standard time-assessing indicator for this. Some work deploys the model on a mobile phone and measures latency there, which is closer to daily scenes; however, we have not done such a deployment or written such a script yet. If you would like to reproduce the inference time reported in the paper, just run the test file and the terminal log will show you the time.
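The timing procedure described above (exclude one-time loading costs, run warm-up iterations, then average over timed runs) can be sketched as a small harness. This is a minimal illustration, not the script used for the paper's numbers; `run_once` is a hypothetical callable that would wrap a forward pass such as `model(x)` under `torch.no_grad()`, and for GPU models you would also call `torch.cuda.synchronize()` around the timed region so that asynchronous kernel launches do not make the timings misleadingly small.

```python
import time

def measure_inference_time(run_once, warmup=10, iters=50):
    """Average wall-clock seconds per call of `run_once`.

    `run_once` is assumed to perform one full forward pass.
    """
    # Warm-up: the first calls absorb one-time costs (CUDA context
    # creation, cuDNN autotuning, allocator growth) and are not timed.
    for _ in range(warmup):
        run_once()

    # For a CUDA model, insert torch.cuda.synchronize() here and after
    # the loop so the timer measures completed kernels, not launches.
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    elapsed = time.perf_counter() - start

    return elapsed / iters  # average seconds per inference
```

With a real model this would be called as `measure_inference_time(lambda: model(x))`; the reciprocal of the result gives an approximate frames-per-second figure for single-image throughput.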