-
Hi,
Thank you for sharing your codebase; it has helped me a lot in developing a QAT method!
However, I am having trouble reproducing the results on the WizardCoder and MetaMath models.
For WizardCod…
-
aimet version: 1.28
SNPE version: 2.14
deploy platform: SM8550 DSP w8 a8 bias32
I have a model, and the backbone of this model is MobileNetV3. As you know, MobileNetV3 primarily consists of poi…
-
I tried to use AIMET 1.25.0 to quantize a Keras model with QAT, but I got the warning below:
WARNING:tensorflow:Model failed to serialize as JSON. Ignoring... maximum recursion depth exceeded
tensorflow …
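One guess (not confirmed from the log alone) is that the model's layer config is nested deeply enough that Keras's JSON serializer hits Python's default recursion limit. A quick experiment is to raise the limit before saving; the value below is an arbitrary, generous choice:

```python
import sys

# Hypothesis: "maximum recursion depth exceeded" comes from a deeply
# nested model config hitting Python's default limit (usually 1000).
# Raising it before model.save()/model.to_json() may silence the warning.
sys.setrecursionlimit(10000)
```

If the warning persists at a much higher limit, the cause is likely a genuinely cyclic or non-serializable object in the config rather than depth.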
-
Hi all,
I believe I have encountered a bug in the optimization around the reduction of wake signals for QAT offload, which causes unneeded latency. The issue seems to be around the usage of thread…
-
Might be worthwhile to look into
```python
from torchao.float8 import convert_to_float8_training
convert_to_float8_training(model)
```
sometime.
-
A ValueError occurs when I run quantization-aware training using PACT.
The error message is "ValueError: saturation_min must be smaller than saturation_max", and it seems the error occurs when the …
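For context, here is a minimal sketch of PACT-style activation quantization showing the kind of range check that raises this error. The function and parameter names (`pact_quantize`, `alpha`) are illustrative, not the repo's actual API; in PACT the clipping level alpha is learned, so if it collapses to zero or below during training, the quantization range becomes degenerate:

```python
import numpy as np

def pact_quantize(x, alpha, n_bits=8):
    # PACT clips activations to [0, alpha], then quantizes uniformly.
    sat_min, sat_max = 0.0, float(alpha)
    # The degenerate-range check that produces the reported error:
    if sat_min >= sat_max:
        raise ValueError("saturation_min must be smaller than saturation_max")
    scale = sat_max / (2 ** n_bits - 1)
    q = np.round(np.clip(x, sat_min, sat_max) / scale)
    return q * scale
```

A common mitigation is to clamp or re-parameterize alpha (e.g. optimize its log) so it stays strictly positive.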
-
**System information**
- TensorFlow version (you are using): 2.6
- Are you willing to contribute it (Yes/No):
**Motivation**
Need to run OD (object detection) models on device after QAT.
**Describe the feat…
-
Hi Everyone,
I built an NN with a BatchNormalization layer and tried to quantize the whole model for an EdgeTPU application. I have read that I can use this layer after a Dense or Conv2D layer in t…
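For reference, quantization toolchains typically handle a Conv2D + BatchNormalization pair by folding the BN affine transform into the conv weights and bias before quantizing. A minimal sketch of that arithmetic, per output channel (names and the OIHW weight layout are my assumptions, not a specific library's API):

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-3):
    # BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, with
    # y = conv(x; w) + b, folds into new conv weights/bias:
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale.reshape(-1, 1, 1, 1)  # assuming OIHW layout
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded
```

After folding, the BN layer disappears from the deployed graph, which is why a Conv2D followed by BatchNormalization is usually fine for integer-only targets like the EdgeTPU.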
-
**System information**
- TensorFlow version (you are using): 2.3.2
- Are you willing to contribute it (Yes/No):
**Motivation**
What is the use case and how does it broadly benefit users? Pri…
-
1. In qat_export, how can the quantize/dequantize operators be removed?
2. In qat_export, the qat_mAP is lower than the val mAP reached by the quantized model during QAT training. I have compared all kinds of parameters but cannot find the cause of the mAP drop.
In principle, the model weights should be identical.