-
I am trying to implement the quantized version of MobileNet v1 in OpenCL. I have referenced the method that you have provided in https://arxiv.org/pdf/1712.05877.pdf . I am using pretrained MobileNet w…
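For context, the core of the scheme in that paper is the affine mapping r = S(q − Z): every real value r is represented by an integer q, a scale S, and a zero point Z, and a matrix multiply accumulates in int32 before requantizing to uint8. A minimal NumPy sketch (the paper implements the requantization multiplier M = S1·S2/S3 as a fixed-point multiply plus shift; here it is applied in floating point purely for clarity):

```python
import numpy as np

def quantize(r, scale, zero_point):
    """Map real values to uint8 via q = round(r / S) + Z."""
    q = np.round(r / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Recover approximate real values via r = S * (q - Z)."""
    return scale * (q.astype(np.int32) - zero_point)

def quantized_matmul(qa, qb, sa, za, sb, zb, sc, zc):
    """Integer matmul with requantization: subtract zero points,
    accumulate in int32, then rescale by M = sa*sb/sc (float here;
    fixed-point multiplier + bit shift in the paper)."""
    acc = (qa.astype(np.int32) - za) @ (qb.astype(np.int32) - zb)
    m = (sa * sb) / sc
    qc = np.round(m * acc) + zc
    return np.clip(qc, 0, 255).astype(np.uint8)
```

The same structure carries over to an OpenCL kernel: the inner loop stays entirely in integer arithmetic, and only the final rescale touches the multiplier M.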
-
Is there already a plan to add binary ops like bitcount for [XNOR-NET](http://arxiv.org/abs/1603.05279)?
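For reference, the reason bitcount matters for XNOR-Net: when both operands are constrained to {−1, +1} and packed into machine words, a dot product reduces to one XNOR plus one popcount. A small Python sketch of that identity (bit i set means +1):

```python
def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n packed as integers.
    XNOR-popcount identity: dot = n - 2 * popcount(a XOR b), since each
    differing bit contributes -1 and each matching bit contributes +1."""
    mask = (1 << n) - 1
    return n - 2 * bin((a_bits ^ b_bits) & mask).count("1")
```

On hardware this is where a native bitcount/popcount op pays off: one instruction replaces n multiply-accumulates.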
-
We are proposing a new dialect named `QNN`, that introduces a quantized version of existing relay operators. The goal is to support the models that have been pre-quantized in the framework.
Some i…
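As a rough sketch of what such a dialect buys: the quantized ops carry scale and zero-point attributes and can be lowered to existing elementwise Relay ops. The decomposition below is illustrative NumPy, not the actual QNN lowering code:

```python
import numpy as np

def lower_qnn_quantize(x, scale, zero_point, dtype=np.uint8):
    """qnn.quantize(x) can lower to existing ops:
    cast(clip(round(x / scale) + zp, qmin, qmax), dtype)."""
    info = np.iinfo(dtype)
    q = np.round(x / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def lower_qnn_dequantize(q, scale, zero_point):
    """qnn.dequantize(q) can lower to:
    multiply(subtract(cast(q, int32), zp), scale)."""
    return (q.astype(np.int32) - zero_point) * scale
```

Framework parsers then only need to map their pre-quantized ops onto the QNN dialect; the lowering to base operators is shared.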
-
**Is your feature request related to a problem? Please describe.**
Nowadays, there is a need to take floating-point models that have been trained and deploy them to edge devices. One way that is …
-
Hello,
I'm trying to quantise my CNN classification model using simple post-training quantisation. The whitepaper by Krishnamoorthi on quantizing CNNs for efficient inference suggests that we can try t…
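For what it's worth, the simplest post-training approach is min/max calibration: pick an affine uint8 mapping from the observed range of each tensor, with no retraining. A minimal sketch (the range-nudging to keep zero exactly representable follows common practice, not any one library's API):

```python
import numpy as np

def calibrate_minmax(tensor, num_bits=8):
    """Derive (scale, zero_point) for an affine uint mapping from the
    observed min/max of a tensor."""
    qmax = 2 ** num_bits - 1
    lo, hi = float(tensor.min()), float(tensor.max())
    # Widen the range to include 0 so that real zero maps to an exact
    # integer (important for zero-padding in convolutions).
    lo, hi = min(lo, 0.0), max(hi, 0.0)
    scale = (hi - lo) / qmax or 1.0   # guard against an all-zero tensor
    zero_point = int(round(-lo / scale))
    return scale, zero_point
```

More careful calibration (percentile clipping, KL-divergence range selection) trades a little saturation for much less rounding error on long-tailed activations.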
-
**System information**
- OS Platform and Distribution: Google coral board aarch64
- TensorFlow Lite installed from source
- TensorFlow version: master branch
If I run the script ./tensorflow/lit…
-
Let me first reference @ajtulloch 's [comment](https://github.com/dmlc/tvm/pull/2116#issuecomment-444694200) about the quantization workflow:
> 1. Implement a model in a standard ML framework, genera…
-
## 🐛 Bug
When building PyTorch from source on the latest version of macOS (10.14.6), I get an error (full error message and output of `collect_env.py` below).
## To Reproduce
Steps to reproduce t…
-
The binary file support for quantized values, described in the spec, looks pretty good, and I see handling of quantization in the NNEF TensorFlow exporter.
https://github.com/KhronosGroup/NNEF-To…
-
To increase quantization support in TVM, it is necessary to support the pre-quantized models, i.e., the models that have been quantized in the framework itself (outside of Relay). In this issue, we ar…