-
### Ⅰ. Issue Description
QAT_Engine 1.5.0, QAT driver: QAT.L.4.23.0-00001, OpenSSL 1.1.1w.
The qat fw_counter values do not increase; they stay at 0.
### Ⅱ. Describe what happened
The qat fw_counter values do not increase; they stay at 0.
### Ⅲ. Describe what you e…
-
Does QAT Engine support the F-Stack user-space network development kit?
-
### 🐛 Describe the bug
```python
from torch.ao.quantization.quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
```
The code above reports this error:
ImportError: cannot import name 'X…
-
### Ⅰ. Issue Description
QAT_Engine 0.6.19, QAT driver: QAT.L.4.23.0-00001, OpenSSL 1.1.1w.
When using Tengine with QAT hardware offload, every HTTPS request from a client causes Tengine's logs/error.log to print a line like the following:
2024/04/16 19:20:37 [alert] 38566#0: *408…
-
Thanks for the great work. I want to report a bug in cmd_sensitive_analysis in qat.py.
When calling the **quantize.calibrate_model** function, a `device` parameter is underwritten, ca…
-
Hi,
After QAT, I would like to run model inference and convert a custom object-detection model (not in mmdetection) from PyTorch to ONNX. Could you please share sample code for this?
…
-
Since some models such as Mask R-CNN gain a large speed-up with only a tiny precision drop in int8, is there any plan to support automatic QAT training in d2?
-
### 💡 Your Question
I tuned the quantization weights during training and got an ONNX model with Q/DQ layers as output. However, when I use TensorRT to convert the file to an engine with int8 precision,…
-
- Nutrimatic
- QAT
- Onelook
(Can probably yoink from MBot - https://github.com/Moonrise55/Mbot )
-
## Description
I have seen two quantization libraries built by NVIDIA: [TRT modelopt](https://github.com/NVIDIA/TensorRT-Model-Optimizer) and [pytorch-quantization](https://docs.nvidia.com/d…