-
# Model Request
### Which model would you like to see in the model zoo?
A quantized MobileNet (it doesn't matter which version) would be fine. TensorFlow has published end-to-end quantized [MobileNet…
-
### 💡 Your Question
Hi,
I am just checking: I see in the published results that YOLO-NAS-L does not suffer much performance loss when converted to YOLO-NAS-INT8-L. Can I check what exactly is meant …
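For background on why a small drop is plausible (this is not from the issue itself): INT8 quantization maps each float tensor to 8-bit integers with a shared scale, so the round-trip error per value is bounded by half a quantization step. A minimal sketch of symmetric per-tensor INT8 quantization, with illustrative values:

```python
# Minimal sketch (illustrative values, not the Yolo-NAS pipeline):
# symmetric per-tensor INT8 quantization and its round-trip error.

def quantize_int8(values):
    """Map floats to int8 with a shared scale; return (ints, scale)."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
assert max_err <= scale / 2  # worst case is half a quantization step
```

The bounded per-weight error is one reason a well-calibrated INT8 model can stay close to its FP32 baseline.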
-
1. Does the newly released 'TFLite Export with INT8 Quantization' only quantize the yolov8 backbone (or image encoder)? I note that you emphasize 'Please use Reparameterized YOLO-World for TFLite!!',…
-
Review of *Guide to Quantization and Quantization Aware Training using the TensorFlow Model Optimization Toolkit*
> TensorFlow's Model Optimization Toolkit (TFMOT) contains tools that you can use to quanti…
-
Hello,
Firstly, thank you for your work on this repository. The implementation of Quantization Aware Training (QAT) and Post Training Quantization (PTQ) is very helpful.
I've been trying to appl…
-
## ⚙️ Request New Models
- Link to an existing implementation (e.g. Hugging Face/Github): https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf
- Is this model architecture supported by ML…
-
I want to use the QAT method for my model, but I can only find the PTQ quantizer in executorch. Are there any examples of how to implement Quantization Aware Training (QAT) for the qnn backend?
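For readers unfamiliar with the mechanism being asked about (this is a generic sketch, not the ExecuTorch/QNN API): QAT inserts "fake quantization" into the forward pass — weights are quantized then immediately dequantized, so the loss sees INT8-rounded values while gradients update the latent float weights via the straight-through estimator:

```python
# Generic QAT sketch (not ExecuTorch/QNN API). The forward pass uses the
# fake-quantized weight; the gradient is applied to the float weight as if
# quantization were the identity (straight-through estimator).

def fake_quantize(w, scale):
    q = max(-128, min(127, round(w / scale)))
    return q * scale  # dequantized value seen by the forward pass

# Toy training step on the loss (wq - 0.3)^2 with illustrative constants.
weight, scale, lr = 0.5, 0.02, 0.1
for _ in range(5):
    wq = fake_quantize(weight, scale)
    grad = 2 * (wq - 0.3)      # STE: treat d(wq)/d(weight) as 1
    weight -= lr * grad        # update the latent float weight
```

Training this way lets the model adapt to rounding error before export, which is what PTQ alone cannot do.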
-
Create a QAT example based on Ultralytics YOLOv8. The example should be added in `examples/quantization_aware_training/torch/yolov8/`
Motivation:
- Demonstrating the NNCF QAT API
- Demonstrating th…
-
I have already trained 32-bit weights. Can I use them as pretrained weights and then train with quantization awareness (using tf.contrib.quantize.create_training_graph(quant_delay=300000))?
Can someone guide …
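For context on the `quant_delay` parameter mentioned above (a toy illustration, not the tf.contrib implementation): fake quantization is skipped for the first `quant_delay` steps, so training resumes from the pretrained float weights undisturbed before quantization effects are introduced:

```python
# Toy illustration of quant_delay (not tf.contrib.quantize internals):
# fake quantization is disabled for the first `quant_delay` steps, so early
# training behaves like ordinary float fine-tuning.

def fake_quantize(w, scale=0.02):
    return max(-128, min(127, round(w / scale))) * scale

def train(steps, quant_delay, w=0.9, lr=0.1, target=0.25):
    for step in range(steps):
        # Before the delay expires, the forward pass uses the float weight.
        w_eff = fake_quantize(w) if step >= quant_delay else w
        w -= lr * 2 * (w_eff - target)   # gradient of (w_eff - target)^2
    return w

full_float = train(steps=20, quant_delay=20)   # never quantized
delayed    = train(steps=20, quant_delay=10)   # fake-quant after step 10
```

Both runs converge near the target; the delayed run simply settles onto the nearest quantization-grid point, which mirrors why a large `quant_delay` (e.g. 300000 steps) lets fine-tuning stabilize before QAT kicks in.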