ogencoglu opened 2 weeks ago
We haven't experimented much with ONNX so far. That said, we do support export, and once you export a model you can use an ONNX backend.
If you want to work through an example and post your progress here, I'm happy to unblock you! We could add an example to the repo.
Do quantized models here stay quantized in ONNX after conversion? Can you even convert/export them to ONNX? And what about the other way around: can you export a sparse model to ONNX and then quantize it in ONNX afterwards?