-
**Describe the bug**
I'm doing transfer learning and would like to quantize my model at the end. The problem is that when I try to use the _quantize_model()_ function (which is used successfully in…
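For context on what a quantize step does to trained weights, here is a minimal pure-Python sketch of per-tensor affine int8 quantization, the scheme such tooling typically applies. All names below (`quantize_dequantize`, the example weights) are hypothetical and this is not the TFMOT API:

```python
# Illustrative sketch (not the TFMOT API) of per-tensor affine quantization:
# map a float range onto a signed 8-bit grid, then map back.

def quantize_dequantize(values, num_bits=8):
    """Quantize a list of floats to signed integers and back."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) if hi != lo else 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]   # hypothetical trained weights
restored = quantize_dequantize(weights)
# Each restored weight differs from the original by at most one scale step.
```

The round trip shows the quantization error a quantized model carries, which is what quantization-aware training tries to compensate for during fine-tuning.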
-
Hi,
I see that TF 1.x's QAT library sets **zero_debias** to True. However, TFMOT's quantization library calls the following in quant_ops.py:
_assign_min = moving_averages.assign_mov…
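To make the difference concrete, here is a small pure-Python sketch (not the TF implementation) of what zero-debiasing changes in an exponential moving average: with `zero_debias=True` the running estimate is divided by `1 - decay**step`, so early steps are not biased toward the zero initialization.

```python
# Sketch of an exponential moving average with optional zero-debiasing,
# the correction TF 1.x QAT enables for its min/max range trackers.

def moving_average(samples, decay=0.99, zero_debias=True):
    biased = 0.0  # zero-initialized accumulator, as in the TF variables
    for step, x in enumerate(samples, start=1):
        biased = decay * biased + (1.0 - decay) * x
        estimate = biased / (1.0 - decay ** step) if zero_debias else biased
    return estimate

# With a constant signal, the debiased average recovers it exactly,
# while the plain average is still pulled toward its zero initialization.
print(moving_average([5.0] * 3, zero_debias=True))   # 5.0
print(moving_average([5.0] * 3, zero_debias=False))  # ~0.1485
```

This is why the two libraries can track noticeably different min/max ranges in the first training steps even on identical data.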
-
**System information**
- TensorFlow version (you are using): tensorflow-2.3.0
- Are you willing to contribute it (Yes/No): No
RuntimeError: Layer tf_op_layer_ResizeNearestNeighbor: is not suppor…
-
Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. …
-
Hi,
I have a pretrained detection model that I trained in TensorFlow 2.3 at fp32 precision. I used this model's weights as the initial weights for Quantization Aware Training (QAT). During traini…
-
The [tutorial](https://pytorchvideo.org/docs/tutorial_accelerator_build_your_model) shows how to build an efficient network with modules provided by `pytorchvideo.layers.accelerator` and how to conver…
-
Hi, I'm working on applying QAT on a model. I made the necessary modifications. However, when I looked into one of the saved checkpoint `.pth` files, I observed that none of the weights were actually …
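Unchanged weights in a QAT checkpoint are often expected behavior: fake quantization rounds the weight only on the forward pass, while the stored parameter stays full-precision. A hypothetical sketch (the names `fake_quantize` and the values are illustrative, not any framework's API):

```python
# Why QAT checkpoints can look "unquantized": the fake-quantize op
# round-trips the weight through an integer grid during the forward pass,
# but the underlying fp32 parameter is never overwritten.

def fake_quantize(w, scale=0.1):
    """Round-trip w through an integer grid; w itself is not modified."""
    return round(w / scale) * scale

weight = 0.4237                        # fp32 value that lands in the checkpoint
output = fake_quantize(weight) * 2.0   # forward math uses the quantized value
print(weight)                 # 0.4237 -> checkpoint still holds fp32
print(fake_quantize(weight))  # 0.4    -> only the forward pass sees this
```

The integer weights are materialized only at export/conversion time, not in the training checkpoints.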
-
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: Custom code
- **OS Platform and Distribution (e.g., Linux U…
-
Hi, I'm trying to train a QAT model using 16-bit activations and 8-bit weights in order to run it on the DSP of a Snapdragon 888.
The training works as it should and the model is converging. When running t…
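For reference, here is a small sketch (not the Snapdragon toolchain) of the integer ranges a 16-bit-activation / 8-bit-weight (16x8) scheme involves; `symmetric_scale` and the example ranges are hypothetical:

```python
# Symmetric quantization picks a scale so the observed absolute range
# maps onto the signed integer grid for the chosen bit width.

def symmetric_scale(max_abs, num_bits):
    qmax = 2 ** (num_bits - 1) - 1   # 127 for int8, 32767 for int16
    return max_abs / qmax

weight_scale = symmetric_scale(0.5, num_bits=8)       # weights in [-0.5, 0.5]
activation_scale = symmetric_scale(6.0, num_bits=16)  # activations in [-6, 6]
# int16 activations give ~256x finer resolution than int8 over the same
# range, which is why 16x8 helps accuracy-sensitive models on DSP backends.
```

A mismatch between the bit widths assumed at training time and those the runtime actually applies is a common source of accuracy drops after deployment.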
-