thu-nics / MixDQ

[ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
https://a-suozhang.xyz/mixdq.github.io/

Can the model be exported to ONNX and run with onnxruntime? #3

Open dragen1860 opened 5 months ago

dragen1860 commented 5 months ago

Hi, dear author: The memory reduction is very attractive and will benefit the model's application. I wonder whether ONNX currently supports the techniques you proposed, and whether inference can be run with the onnxruntime framework?

A-suozhang commented 5 months ago

Thank you for your interest in our work! We haven't tried ONNX Runtime yet, but we believe it should be applicable. MixDQ adopts a standard, deployment-friendly quantization scheme, and we have already tested MixDQ with the pytorch_quantization deployment tool.
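For reference, below is a minimal sketch of the generic flow being discussed: exporting a (quantized) PyTorch module to ONNX and running it with ONNX Runtime. The tiny stand-in module and the file name `unet_int8.onnx` are placeholders for illustration only, not the actual MixDQ UNet or its export script; this has not been verified against MixDQ itself.

```python
# Sketch only: export a PyTorch module to ONNX, then run it with ONNX Runtime.
# The stand-in module and file name are placeholders, NOT the MixDQ UNet.
import numpy as np
import torch
import onnxruntime as ort


class TinyStandIn(torch.nn.Module):
    """Placeholder for the quantized UNet; any torch.nn.Module exports the same way."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


model = TinyStandIn().eval()
dummy = torch.randn(1, 4, 64, 64)  # latent-shaped dummy input

# Export to ONNX. Fake-quant layers from pytorch_quantization are typically
# exported as QuantizeLinear/DequantizeLinear nodes in the graph.
torch.onnx.export(
    model, dummy, "unet_int8.onnx",
    input_names=["latent"], output_names=["out"], opset_version=17,
)

# Run the exported graph with ONNX Runtime and compare against PyTorch.
sess = ort.InferenceSession("unet_int8.onnx")
ort_out = sess.run(None, {"latent": dummy.numpy()})[0]
torch_out = model(dummy).detach().numpy()
print("max abs diff:", np.abs(ort_out - torch_out).max())
```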

A-suozhang commented 5 months ago

If you are interested in deploying MixDQ with ONNX Runtime or other tools, we are also open to discussion and support. PRs are welcome!