-
**Is your feature request related to a problem? Please describe.**
I'm trying to perform quantization. I converted the model using `quantize_qat`, but I am not able to figure out how to do the training using …
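A minimal sketch of what the training step can look like, assuming `quantize_qat` here refers to PyTorch's eager-mode `torch.ao.quantization.quantize_qat`; the `TinyNet` model and in-memory loader are illustrative placeholders, not the asker's actual setup:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

# Toy float model with the quant/dequant stubs eager-mode QAT expects.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.fc = nn.Linear(16, 4)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

def train_loop(model, loader, epochs=2):
    # Ordinary training loop; the fake-quant observers calibrate while training.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model = TinyNet()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
loader = [(torch.randn(8, 16), torch.randint(0, 4, (8,)))]  # placeholder data
# quantize_qat = prepare_qat -> run the training fn -> convert to int8 modules
quantized = tq.quantize_qat(model, train_loop, (loader,))
```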
-
When I execute `python quantization_aware_training.py`, it outputs this error:
```bash
Traceback (most recent call last):
  File "quantization_aware_training.py", line 81, in <module>
    model.load_s…
```
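A common cause of failures at `model.load_state_dict(...)` in QAT scripts is loading a checkpoint saved from a prepared (fake-quantized) model into a plain float model, so the observer/fake-quant keys don't match. A hedged sketch of the usual fix, preparing the model with the same qconfig before loading; `build_model` and the checkpoint path are hypothetical stand-ins:

```python
import torch
import torch.ao.quantization as tq

def build_model():
    # Stand-in for the script's real model definition.
    return torch.nn.Sequential(torch.nn.Linear(16, 4))

model = build_model()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
model.train()
tq.prepare_qat(model, inplace=True)  # recreates fake-quant/observer modules

# Only now do the checkpoint keys line up with the prepared model.
state = torch.load("qat_checkpoint.pth", map_location="cpu")
model.load_state_dict(state)
```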
-
### 💡 Your Question
I tuned the quantization weights during training and got an ONNX model with Q/DQ layers as output. However, when I use TensorRT to convert the file to an engine with int8 precision,…
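A minimal sketch of the conversion step, assuming a TensorRT 8.x-style Python API; with explicit Q/DQ layers the scales come from the model itself, so only the INT8 builder flag is needed (file names are placeholders, and `trtexec --onnx=model_qdq.onnx --int8 --saveEngine=model.engine` is the CLI equivalent):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the Q/DQ ONNX file; scales/zero-points come from the Q/DQ nodes.
with open("model_qdq.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # explicit quantization, no calibrator

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```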
-
I am trying to quantize a custom yolov5 model using the quantization-aware training method. I have applied all the model adjustments described in the Vitis AI user guide v3.0 and wrote the QAT code base…
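For reference, a hedged sketch of the QAT flow as the Vitis AI PyTorch user guide describes it, built around `pytorch_nndct.QatProcessor`; the stand-in model, input shape, and `qat_out` directory are placeholders for the adjusted yolov5 model and pipeline:

```python
import torch
from pytorch_nndct import QatProcessor

# Stand-in for the adjusted yolov5 float model from the user guide.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
)
dummy_input = torch.randn(1, 3, 640, 640)

qat_processor = QatProcessor(model, (dummy_input,), bitwidth=8)
quant_model = qat_processor.trainable_model()

# ... train quant_model with the usual loss/optimizer loop ...

# Export a deployable model for the Vitis AI compiler after training.
deployable = qat_processor.to_deployable(quant_model, output_dir="qat_out")
```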
-
**For faster response**
You can @ the corresponding developers for your issue. Here is the division:
| Features | Developers …
-
1. Does the newly released 'TFLite Export with INT8 Quantization' quantize only the yolov8 backbone (or image encoder)? I note that you emphasize 'Please use Reparameterized YOLO-World for TFLite!!',…
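For context, a generic full-integer TFLite export sketch with `tf.lite.TFLiteConverter`; whether YOLO-World's export quantizes only the backbone is for the developers to confirm, and the SavedModel path, input shape, and calibration data here are placeholders:

```python
import numpy as np
import tensorflow as tf

def rep_data_gen():
    # Placeholder calibration data; real preprocessed images should be used.
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
# Force full-integer ops so the whole graph, not just part of it, runs in int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```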
-
From the issue "https://developer.apple.com/forums/thread/740518 how do we use the computational power of A17 Pro Neural Engine?",
I learned that if I want to run inference with my mlmodel on my iPad Pro with …
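A hedged sketch with coremltools 7+, quantizing an existing model's weights to int8 and loading it with `ComputeUnit.ALL` so Core ML is free to schedule work on the Neural Engine; the `.mlpackage` paths are placeholders:

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load an existing Core ML model package (placeholder path).
mlmodel = ct.models.MLModel("model.mlpackage")

# Linear (symmetric) int8 weight quantization applied to the whole model.
config = cto.OptimizationConfig(
    global_config=cto.OpLinearQuantizerConfig(mode="linear_symmetric")
)
quantized = cto.linear_quantize_weights(mlmodel, config=config)
quantized.save("model_int8.mlpackage")

# ComputeUnit.ALL lets Core ML dispatch to CPU, GPU, or the Neural Engine.
runtime_model = ct.models.MLModel(
    "model_int8.mlpackage", compute_units=ct.ComputeUnit.ALL
)
```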
-
Firstly, thanks to all of you for the great project!
Currently, the model does not seem to support int8 quantization. Are there any plans for it?
-
### Describe the issue
I have a pre-trained TensorFlow SavedModel of a CNN and I converted it to **.onnx form** as well as a **static quantized .onnx form**, and their inference latency at the…
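A small sketch for measuring the two models' latency under comparable conditions with onnxruntime (paths and input shape are placeholders); note that on CPU, int8 speedups depend heavily on the execution provider and on whether the quantized ops have optimized kernels:

```python
import time
import numpy as np
import onnxruntime as ort

def mean_latency(path, shape, n=100):
    sess = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    x = np.random.rand(*shape).astype(np.float32)
    sess.run(None, {name: x})  # warm-up run, excluded from timing
    t0 = time.perf_counter()
    for _ in range(n):
        sess.run(None, {name: x})
    return (time.perf_counter() - t0) / n

for path in ("model_fp32.onnx", "model_int8.onnx"):  # placeholder files
    print(path, f"{mean_latency(path, (1, 3, 224, 224)) * 1e3:.2f} ms")
```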
-
**Describe the bug**
I'm doing transfer learning and would like to quantize my model at the end. The problem is that when I try to use the _quantize_model()_ function (which is used successfully in…
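Assuming `_quantize_model()_` refers to `tfmot.quantization.keras.quantize_model`: a frequently reported failure mode in transfer-learning setups is embedding the whole base network as a single nested layer, which `quantize_model` does not handle; flat Sequential/functional models are the supported case. A minimal sketch on a flat placeholder model:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Flat (non-nested) placeholder model; a transfer-learning model that wraps
# a whole base model as one layer typically needs to be flattened, or
# annotated layer-by-layer, before quantize_model will accept it.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

q_aware = tfmot.quantization.keras.quantize_model(model)
q_aware.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
q_aware.summary()
```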