-
Models with quantized weights (but not quantized input or output tensors) don't seem to work.
While a tflite model created with
```
converter = tf.lite.TFLiteConverter.from_saved_model("./model_path/")
tflit…
```
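The snippet above is cut off; a minimal sketch of the usual weight-only (dynamic-range) quantization path, using a tiny hypothetical Keras model in place of the issue's saved model (`from_saved_model("./model_path/")` works the same way):

```python
import tensorflow as tf

# Hypothetical stand-in model; the original issue converts a saved model on disk.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT with no representative dataset quantizes the weights only;
# input and output tensors stay float32, matching the situation described above.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized flatbuffer as bytes
```

With a representative dataset and explicit `inference_input_type`/`inference_output_type`, the I/O tensors would be quantized as well.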
-
From the SIG Micro list:
`If you explicitly have TF_LITE_STATIC_MEMORY defined in a Make or build config, please update that build define to TF_LITE_MICRO.`
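In a Makefile that would be the following one-line change (illustrative variable name; the quote covers any build config that sets the define):

```make
# Before: the deprecated TFLM build define
# CXXFLAGS += -DTF_LITE_STATIC_MEMORY

# After: the replacement define named in the SIG Micro guidance
CXXFLAGS += -DTF_LITE_MICRO
```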
rafzi updated
3 years ago
-
In the VS Code extension, we have a way to detect a language using https://github.com/microsoft/vscode-languagedetection. It has a model embedded which comes from https://github.com/yoeo/guesslang. It…
-
### 1. System information
- OS Platform and Distribution: Windows 11
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library: 2.13
I am attempting to convert a QAT model trained with int8 we…
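For background on what int8 weight quantization does, here is a self-contained sketch of the affine scale/zero-point arithmetic (an illustration of the scheme, not the converter's exact algorithm; TFLite weights are typically quantized symmetrically with zero point 0):

```python
def quant_params(rmin, rmax, qmin=-128, qmax=127):
    """Compute an affine quantization scale and zero point for a float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # the range must contain 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to the nearest representable int8 value, clamped to range."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map an int8 value back to its float approximation."""
    return (q - zero_point) * scale

scale, zp = quant_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
approx = dequantize(q, scale, zp)  # within one scale step of 0.5
```

The round trip loses at most half a quantization step, which is why a QAT model (trained with this rounding simulated) converts with little accuracy drop.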
-
This application works fine with the mobilenet_quant_v1_224.tflite model. I've trained a custom model following the TensorFlow for Poets Google Codelab and created the graph using this script:
IMAGE_SIZE=224
…
-
### 1. System information
- OS Platform and Distribution: Ubuntu 22.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, if pip package or github SH…
-
### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, …
-
Hi there, I recently tried to convert your model to TF Lite and run inference on it, but I'm experiencing some errors.
I'm using the following code to convert the model to TF Lite:
```
model = L…
```
lukqw updated
3 years ago
-
- [ ] TF Lite arm version
- [ ] setup github action
- [ ] Wait for isar to be updated: https://github.com/isar/isar/issues/876
-