zhengchen1999 / DAT

PyTorch code for our ICCV 2023 paper "Dual Aggregation Transformer for Image Super-Resolution"
Apache License 2.0

infer time #14

Open xingjunhong opened 11 months ago

xingjunhong commented 11 months ago

I would like to know the inference speed of the model. Also, can it be deployed on the Ambarella CV22 chip?

zhengchen1999 commented 11 months ago

We test runtime on an RTX 3090 GPU with an input image size of 128 $\times$ 128:

- DAT: 171.71 ms
- DAT-S: 136.7 ms
- DAT-light: 49.8 ms
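For reference, here is a minimal timing sketch (a hypothetical helper, not the script we used; it assumes a built `model` instance and measures the average forward pass):

```python
import time
import torch

@torch.no_grad()
def benchmark(model, device="cuda", size=128, warmup=10, runs=50):
    """Average forward-pass time in ms on a 1 x 3 x size x size input."""
    model = model.to(device).eval()
    x = torch.randn(1, 3, size, size, device=device)
    for _ in range(warmup):
        model(x)  # warm-up: kernel launches, cuDNN autotuning, caches
    if device == "cuda":
        torch.cuda.synchronize()  # ensure warm-up kernels finished before timing
    start = time.time()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    return (time.time() - start) / runs * 1000
```

On the CPU, the same helper works with `device="cpu"` (no synchronization needed).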

And sorry, I don’t know much about Ambarella CV22.

xingjunhong commented 11 months ago

Thank you very much for your reply. May I also ask about the inference time on CPU and NPU?

zhengchen1999 commented 11 months ago

Inference time on CPU (image size: 128 $\times$ 128):

- DAT-light: 440.3 ms

DAT and DAT-S run very slowly on the CPU and are not recommended.

As for NPUs, I have not used one.

xingjunhong commented 11 months ago

Thank you for your answer. Do you know of any super-resolution algorithms that have been successfully deployed on edge devices? Can they achieve real-time inference?

zhengchen1999 commented 11 months ago

SRPO is one example. But for commercial reasons, they only release the well-trained SRPO .pth file, part of the inference code (without the model definition), and the blending code.

xingjunhong commented 11 months ago

Do you know of a lightweight model with good accuracy and fast inference speed? I have searched for a lot of information online, but the speeds are quite slow.

zhengchen1999 commented 11 months ago

I suggest you look at methods from the NTIRE Challenge on Efficient Super-Resolution (2022, 2023). But I don't know much about lightweight models on edge devices; a model that works well on a GPU may be slow on a CPU or NPU.

xingjunhong commented 11 months ago

Thanks!

xingjunhong commented 11 months ago

Is your network lightweight?

zhengchen1999 commented 11 months ago

DAT-light is a lightweight model on GPU. However, our method is mainly explored on regular-sized models; the lightweight variant is obtained by simply reducing the model size. There is also no optimization for edge devices.
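As a quick way to see what "reducing the model size" amounts to, the parameter counts of the variants can be compared with a generic PyTorch snippet (not repo-specific):

```python
import torch

def count_params_millions(model: torch.nn.Module) -> float:
    # Total trainable parameters, reported in millions.
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```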

xingjunhong commented 11 months ago

Do you have a download link for the pre-trained models on Baidu Netdisk?

zhengchen1999 commented 11 months ago

We release the Baidu Netdisk download links at https://github.com/zhengchen1999/DAT#models.

xingjunhong commented 11 months ago

How big is the trained DAT-light model?

zhengchen1999 commented 11 months ago

43.7 MB. The size is shown on the disk.

xingjunhong commented 11 months ago

Can it be converted to an ONNX model?

zhengchen1999 commented 11 months ago

Yes, you can. Phhofm released some pretrained weights in ONNX form in #3. But I don't know how to do the conversion myself.
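A minimal export sketch, assuming the BasicSR-style layout of this repo (`basicsr/archs/dat_arch.py`) and a downloaded checkpoint; the constructor arguments and file names below are placeholders that should be matched to the YAML config of the checkpoint you use:

```python
import torch
from basicsr.archs.dat_arch import DAT  # assumes BasicSR-style repo layout

# Hypothetical constructor arguments; match them to the checkpoint's YAML config.
model = DAT(upscale=4)
state = torch.load("DAT_light_x4.pth", map_location="cpu")
# BasicSR checkpoints usually nest the weights under "params".
model.load_state_dict(state.get("params", state), strict=True)
model.eval()

# 128x128 LR input, matching the timing setup above.
dummy = torch.randn(1, 3, 128, 128)
torch.onnx.export(
    model, dummy, "dat_light_x4.onnx",
    input_names=["lr"], output_names=["sr"],
    opset_version=17,
    # Dynamic spatial axes may fail with window attention; drop this
    # argument to export a fixed 128x128 graph if tracing errors occur.
    dynamic_axes={"lr": {2: "h", 3: "w"}, "sr": {2: "H", 3: "W"}},
)
```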