Open xingjunhong opened 11 months ago
We tested runtime on an RTX 3090 GPU with an input image size of 128 $\times$ 128:
- DAT: 171.71 ms
- DAT-S: 136.7 ms
- DAT-light: 49.8 ms
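For anyone who wants to reproduce such timings, a minimal, generic latency helper looks like the sketch below. `benchmark` is a hypothetical helper, not part of the DAT repo; when timing a CUDA model you must also synchronize the device before and after the timed loop, otherwise asynchronous kernel launches make the numbers meaningless.

```python
import time

def benchmark(fn, warmup=10, runs=100):
    """Return the mean latency of fn() in milliseconds."""
    # Warm-up runs exclude one-time costs (allocator setup, kernel autotuning).
    for _ in range(warmup):
        fn()
    # For GPU models, call torch.cuda.synchronize() here and again after
    # the timed loop so that all queued kernels are actually finished.
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0
```

Usage would be something like `benchmark(lambda: model(x))` with `x = torch.randn(1, 3, 128, 128).cuda()` and the model in `eval()` mode under `torch.no_grad()`.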
And sorry, I don’t know much about Ambarella CV22.
Thank you very much for your reply. May I also ask about the inference time on CPU and NPU?
Inference time on CPU (image size: 128 $\times$ 128):
- DAT-light: 440.3 ms
- DAT and DAT-S run very slowly on the CPU and are not recommended.
As for NPUs, I have not used one.
Thank you for your answer. Do you know of any examples of super-resolution algorithms that have been successfully deployed on edge devices? Can real-time inference be achieved?
SRPO. But due to commercial reasons, they only release the well-trained SRPO .pth file, part of the inference code (without the model definition), and the blending code.
Do you know of a lightweight model with good accuracy and fast inference speed? I have searched a lot online, but the speeds are all quite slow.
Thanks!
Is your network lightweight?
DAT-light is a lightweight model on GPU. However, our method is mainly explored on regular-sized models; the lightweight variant is obtained by simply reducing the model size. There is also no optimization for edge devices.
Do you have a download address for the pre trained model on Baidu Netdisk?
We release the download link for Baidu disk at https://github.com/zhengchen1999/DAT#models.
How big is the trained DAT-light model?
43.7 MB. It is shown on the disk.
Can I switch to the ONNX model?
Yes, you can. Phhofm released some pretrained weights in ONNX format in #3. But I don't know how the conversion was done.
I would like to know the inference speed of the model. Also, can it be deployed on the Ambarella CV22?