-
Hi,
I trained a YOLOv8 model and exported it to ONNX format using the quantization_recipe below. I set weight_bits=8 and activation_bits=8 to ensure the full-flow inference of the quantized model is …
-
Hello,
Which driver will be supported for 4xxx QAT devices? Is qatlib supported?
-
Hello, and thank you for this excellent tool.
One question: does ppq currently support QAT for ViT-class models? When I run QAT on a ViT model, the loss is NaN and the validation accuracy is 0. Do I need some extra handling, or is ppq currently unable to support quantization-aware training of ViT models?
-
### 🐛 Describe the bug
shufflenet_v2_x1_0 QAT performance regression

| model_name | qat_new | qat_old | qat ratio (new/old) |
| --- | --- | --- | --- |
| shufflenet_v2_x1_… | | | |
-
Names are number codes. Please correct the metadata and resubmit.
-
Hi,
I am trying to use PyTorch's native QAT instead of pytorch_nndct, and then use Vitis AI's quantization and compilation flow for a VCK190. Is there a way to do this? If not, would the new ONNX compati…
-
### 🐛 Describe the bug
When I try to fuse two modules (e.g., `Conv2d` and `BatchNorm2d`) and run 1,000 random inputs, I find that the output produced by `fuse_modules` is inconsistent with t…
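Small numeric differences after Conv-BN fusion are expected: fusion folds BatchNorm's inference-time scale and shift into the convolution's weights and bias, which reorders floating-point operations. A minimal NumPy sketch of that folding arithmetic (a standalone illustration with made-up shapes, not PyTorch's actual `fuse_modules` code):

```python
import numpy as np

rng = np.random.default_rng(0)
cin, cout, k = 3, 8, 3
w = rng.normal(size=(cout, cin, k, k))   # conv weight
b = rng.normal(size=cout)                # conv bias
gamma = rng.normal(size=cout)            # BN scale
beta = rng.normal(size=cout)             # BN shift
mean = rng.normal(size=cout)             # BN running mean
var = rng.uniform(0.5, 2.0, size=cout)   # BN running variance
eps = 1e-5

# Fold BN (inference mode) into the conv parameters
scale = gamma / np.sqrt(var + eps)
w_fused = w * scale[:, None, None, None]
b_fused = beta + scale * (b - mean)

# Compare both paths at a single spatial position,
# where the conv reduces to a dot product per output channel
x = rng.normal(size=(cin, k, k))
y_sep = (np.tensordot(w, x, axes=3) + b - mean) * scale + beta
y_fused = np.tensordot(w_fused, x, axes=3) + b_fused
print(np.allclose(y_sep, y_fused))
```

The two paths agree to within floating-point tolerance but need not be bit-identical, which is the usual cause of this kind of inconsistency under repeated random inputs.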
-
Are there any runnable demos of using Sparse-QAT/PTQ (2:4) to accelerate inference, such as applying PTQ to a 2:4 sparse LLaMA for inference acceleration? I am curious about the potential speedup rati…
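For reference, 2:4 semi-structured sparsity means each contiguous group of four weights keeps at most two nonzeros, which is the pattern sparse tensor cores can accelerate. A quick NumPy sketch of magnitude-based 2:4 pruning (an illustration of the pattern only, not the tooling of any particular repo):

```python
import numpy as np

def prune_2_4(w):
    """Zero the 2 smallest-magnitude weights in every group of 4 (2:4 pattern)."""
    flat = w.reshape(-1, 4)
    idx = np.argsort(np.abs(flat), axis=1)[:, :2]   # 2 smallest per group of 4
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, idx, False, axis=1)     # drop them
    return (flat * mask).reshape(w.shape)

w = np.arange(1, 9, dtype=np.float64).reshape(2, 4) * np.array([1, -1, 1, -1])
# w = [[1, -2, 3, -4], [5, -6, 7, -8]]
pruned = prune_2_4(w)
# each row of 4 keeps only its 2 largest-magnitude entries:
# [[0, 0, 3, -4], [0, 0, 7, -8]]
```

Whether this yields a real speedup depends on hardware and kernel support, which is exactly what the requested demo would show.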
-
Due to OpenSSL v3.0 deprecations, qatlib can no longer use AES_set_encrypt_key().
To resolve this, we'd like to request a new API for AES key reversal, i.e. expand the key and take the last round, s…
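For context on "expand the key and take the last round": AES decryption starts from the final round key of the forward key schedule, so a key-reversal API essentially runs the standard expansion and returns its last round key. A pure-Python sketch of that computation for AES-128 (a reference illustration against FIPS-197, not qatlib's or OpenSSL's implementation):

```python
def gmul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def sbox(x):
    """AES S-box: multiplicative inverse, then the affine transform."""
    inv = next((c for c in range(1, 256) if gmul(x, c) == 1), 0) if x else 0
    out = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
               ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key_128(key):
    """AES-128 key schedule (FIPS-197): 44 words / 176 bytes."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]             # RotWord
            t = [sbox(v) for v in t]      # SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(t, w[i - 4])])
    return bytes(v for word in w for v in word)

def last_round_key(key):
    """The 'reversed' key: the final round key of the forward schedule."""
    return expand_key_128(key)[-16:]

# FIPS-197 Appendix A.1 example key
key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")
print(last_round_key(key).hex())  # -> d014f9a8c9ee2589e13f0cc8b6630ca6
```

An engine-side API would of course do this in C with the table-based S-box; the point is only that "key reversal" is the forward expansion truncated to its last round.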