-
Since some models, such as MaskRCNN, gain a large speed-up with only a tiny precision drop in int8, is there any plan to add automatic QAT training support in d2?
-
### **System info**
OS: Linux u2404 6.8.0-31-lowlatency #31.1-Ubuntu
gcc: gcc version 13.2.0 (Ubuntu 13.2.0-23ubuntu4)
### **Problem**
**No problem when executing ./configure**
```
# ./c…
```
-
Using QAT.L.4.24.0-00005.tar.gz to build, from: https://www.intel.com/content/www/us/en/download/19734/intel-quickassist-technology-driver-for-linux-hw-version-1-x.html
```
+ export KERNEL_RELEASE…
```
-
In base_quantizier.py, there is this docstring: "PyTorch Function that can be used for asymmetric quantization (also called uniform affine quantization). Quantizes its argument in the forward pas…"
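For reference, asymmetric (uniform affine) quantization can be sketched in plain Python as below. The scale/zero-point formulas are the standard textbook ones, not taken from base_quantizier.py itself:

```python
# Minimal sketch of asymmetric (uniform affine) quantization: a real range
# [lo, hi] is mapped onto unsigned integers [0, 2**num_bits - 1], with a
# zero_point so that the real value 0.0 is represented exactly.

def quantize_asymmetric(xs, num_bits=8):
    """Return (quantized ints, scale, zero_point) for a list of floats."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / qmax or 1.0        # avoid div-by-zero for constant input
    zero_point = round(-lo / scale)        # integer that real 0.0 maps to
    qs = [min(qmax, max(0, round(x / scale) + zero_point)) for x in xs]
    return qs, scale, zero_point

def dequantize(qs, scale, zero_point):
    """Map quantized ints back to (approximate) real values."""
    return [(q - zero_point) * scale for q in qs]
```

For example, `quantize_asymmetric([-1.0, 0.0, 2.0])` spreads the range [-1, 2] over [0, 255], and dequantizing recovers the endpoints exactly.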
-
Hello,
I would like to train my model in a QAT scenario.
But from what I understand, during QAT only the forward-pass calculations are done in quantized mode, whereas the weights that are saved are…
-
Hi,
Thanks for the great work. I want to convert the BEVFusion SwinT+VoxelNet model to ONNX and evaluate its performance.
I trained my BEVFusion model with bevfusion/configs/nuscenes/det/transfusion/se…
-
As title
-
I'm using torchtune for model quantization with QAT. Currently I am following https://pytorch.org/torchtune/main/tutorials/qat_finetune.html, but the results of the prepared_model I printed a…
-
Hello,
Which 4xxx QAT driver will be supported? Is qatlib supported?