-
## 🚀 Feature
### Motivation
This is a request from a user on [Slack](https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1661841050909629).
In their use case, they need to transform th…
-
**Describe the bug**
The `tensorflow_model_optimization.quantization.keras.quantize_model` function throws the following error with MobileNetV3.
```
Traceback (most recent call last):
File "issu…
-
Hello, I see from your experimental results that you applied EFL/EQFL on YOLOX. Is there open-source code for that?
-
### Description of the Feature:
PyTorch QAT supports three modes:
* Eager Mode Quantization
* FX Graph Mode Quantization
* PyTorch 2 Export Quantization
Details: https://pytorch.org/docs/stabl…
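As a point of comparison for the three modes above, here is a minimal sketch of the FX Graph Mode QAT workflow, assuming a recent PyTorch with the `torch.ao.quantization` APIs available; `SmallNet` is a hypothetical toy model used only for illustration.

```python
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx, convert_fx

class SmallNet(torch.nn.Module):
    """Toy model standing in for a real network."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = SmallNet().train()  # QAT requires train mode
example_inputs = (torch.randn(1, 3, 32, 32),)

# Insert fake-quant observers via FX graph tracing
qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")
prepared = prepare_qat_fx(model, qconfig_mapping, example_inputs)

# ... fine-tune `prepared` here with the usual training loop ...
prepared(*example_inputs)  # stands in for training steps

# Convert to an actual int8 model for inference
quantized = convert_fx(prepared.eval())
out = quantized(*example_inputs)
```

Eager Mode Quantization instead requires manually placing `QuantStub`/`DeQuantStub` and fusing modules by hand, which is why the FX and PT2 Export paths are generally preferred for new code.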
-
We are using an Intel Sapphire Rapids processor with integrated QAT accelerators.
After OS installation, we are configuring the QAT config files in the /etc directory and then starting the qat service using "/etc/i…
-
Hi,
This is more of a question than a feature request, and I don't know where else to post it.
I'm trying to perform quantization-aware training on a model that's not of type tf.keras.Model but o…
-
### Description of the bug:
Hello,
I'm encountering an issue when trying to export a model to tflite with quantization. It appears that the tensor shapes are being altered incorrectly somewher…
-
Hi there,
According to the documentation
https://github.com/analogdevicesinc/ai8x-training#quantization-aware-training-qat
we can use either QAT or post-training quantization, but can I use both of them? If …
-
Thanks for sharing the repo. It is really helpful.
I'm exploring ways to do the optimization on GPU. I know it's not presently supported. Could you share some approach or references to implement th…
-
```
\VITS-fast-fine-tuning>python finetune_speaker_v2.py -m ./OUTPUT_MODEL --max_epochs 1000 --drop_speaker_embed True
INFO:OUTPUT_MODEL:{'train': {'log_interval': 10, 'eval_interval': 100, 'see…