shaqian / flutter_tflite

Flutter plugin for TensorFlow Lite
https://pub.dartlang.org/packages/tflite
MIT License

Using quantized tflite models #55

Open bjoernholzhauer opened 5 years ago

bjoernholzhauer commented 5 years ago

When I substitute a quantized model into code that works for image classification with the non-quantized model (I simply substituted 'mobilenet_v2_1.0_224_quant.tflite' for 'Mobilenet_V2_1.0_224'), I get: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

Is it possible to use quantized models? If so, how? (It would be good to have something about this in the documentation.) If not, it would also be good if the documentation said so - or, if it is just not possible at the moment but hopefully will be in the future, I guess consider this a feature request.

securingsincity commented 4 years ago

Based on some searching of issues last night, #53 and #59 are both related to this issue. AutoML Vision Edge outputs a quantized tflite model.

Here are two images from Netron showing the differences between the quantized model and the MobileNet V2 model that flutter_tflite currently supports:

[Netron screenshots comparing the two models' input tensors]

Note that they are the same except one accepts a uint8 list and the other takes a float32 list.

I'm not entirely sure what would need to change on the flutter_tflite side to support this kind of model, but hopefully this helps.
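
To make the difference concrete, here is a minimal Dart sketch of the two input buffers (this assumes the standard 1x224x224x3 MobileNet input shape and is an illustration only, not plugin code):

import 'dart:typed_data';

void main() {
  // Both models expect a [1, 224, 224, 3] input; only the element type differs.
  // Quantized model: one uint8 (1 byte) per channel value.
  final quantInput = Uint8List(1 * 224 * 224 * 3);
  // Float model: one float32 (4 bytes) per channel value.
  final floatInput = Float32List(1 * 224 * 224 * 3);
  print('${quantInput.lengthInBytes} vs ${floatInput.lengthInBytes} bytes'); // 150528 vs 602112
}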

Statyk7 commented 4 years ago

I'm having the same problem and wondering if there is a way or workaround to handle those models?

waltermaldonado commented 4 years ago

From what I have seen, if you use the method that runs detection on binary data, you can use a quantized model. In fact, the image-to-ByteList conversion suggested in the docs assumes 8-bit integers per channel value, as you can see below:

import 'dart:typed_data';
import 'package:image/image.dart' as img;

// Converts an image to a flat [1 * inputSize * inputSize * 3] buffer of
// raw 0..255 RGB values, one uint8 byte per channel, as a quantized model expects.
Uint8List imageToByteListUint8(img.Image image, int inputSize) {
  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
  var buffer = Uint8List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = img.getRed(pixel);
      buffer[pixelIndex++] = img.getGreen(pixel);
      buffer[pixelIndex++] = img.getBlue(pixel);
    }
  }
  return convertedBytes.buffer.asUint8List();
}

This conversion should work for a quantized model, but it does not work for a non-quantized one: for a non-quantized model, the convertedBytes buffer has to be four times as large, because each channel value is stored as a 4-byte float32 instead of a single uint8 (602,112 vs. 150,528 bytes for a 224x224x3 input).

When I run the detections on an image path instead, it works perfectly.

Edit: For non-quantized models the docs suggest:

// Converts an image to a flat buffer of float32 values normalized as
// (pixel - mean) / std per channel, as a non-quantized model expects.
Uint8List imageToByteListFloat32(
    img.Image image, int inputSize, double mean, double std) {
  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
  var buffer = Float32List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
    }
  }
  // Still returned as a Uint8List: the same float32 data viewed as raw bytes,
  // which is what runModelOnBinary expects.
  return convertedBytes.buffer.asUint8List();
}
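
For context, a rough usage sketch showing how the two helpers above might be wired to this plugin's runModelOnBinary (the function name and parameter values are illustrative, and this assumes the model was already loaded with Tflite.loadModel as in the example app):

import 'package:image/image.dart' as img;
import 'package:tflite/tflite.dart';

Future<List> classify(img.Image image, bool quantized) {
  // Quantized models take raw uint8 bytes; float models take normalized float32 bytes.
  final binary = quantized
      ? imageToByteListUint8(image, 224)
      : imageToByteListFloat32(image, 224, 127.5, 127.5);
  return Tflite.runModelOnBinary(
    binary: binary,  // required
    numResults: 5,   // defaults to 5
    threshold: 0.1,  // defaults to 0.1
  );
}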

Statyk7 commented 4 years ago

Have you been able to run the MobileNet quantized version? It can be found here: https://www.tensorflow.org/lite/guide/hosted_models. I have no success with Mobilenet_V1_1.0_224_quant :( I have tried with runModelOnImage and with runModelOnBinary using the image-to-byte functions... no results... (and no errors)

But when using the TensorFlow iOS Sample App it works just fine! https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/ios

waltermaldonado commented 4 years ago

No, I've never tried those models, but I think they should work as well. Let us see your code; maybe we can find something...

Statyk7 commented 4 years ago

I'm using the example provided with the tflite package: https://github.com/shaqian/flutter_tflite/tree/master/example

With an additional asset for the model (the labels are the same as for the non-quantized model) in pubspec.yaml:

  - assets/mobilenet_v1_1.0_224_quant.tflite

Then I load the quantized model instead of the non-quantized one in main.dart's loadModel:

default:
  res = await Tflite.loadModel(
    model: "assets/mobilenet_v1_1.0_224_quant.tflite",
    labels: "assets/mobilenet_v1_1.0_224.txt",
  );

That's it!

waltermaldonado commented 4 years ago

Just to clarify, is your non-quantized model a detection model (localization + classification)? Because it seems to me that those quantized models are classification only models.

Statyk7 commented 4 years ago

It's an image classification model I believe...

Ehtasha commented 4 years ago

@Statyk7 @waltermaldonado

I'm integrating my own custom model into this example, but the app crashes when I send an image to the model using the segmentMobileNet method.

I have also tried runModelOnBinary, but the issue still stands.

My custom model was trained in PyTorch and converted to TensorFlow using ONNX and then to .tflite. The model is not quantized.

yumemi-RyoShimizu commented 4 years ago

I created an image labeling model with AutoML, and since the model should have been quantized, I converted the image to uint8, but the following error was output: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

andrsdev commented 4 years ago

I'm having the same problem!!! Are there any updates on this?

andrsdev commented 4 years ago

Here are my AutoML properties.

It throws this error: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

[Screenshot of the AutoML model properties]

oncul commented 4 years ago

Do you have problems with the AutoML-generated tflite file on iOS?

L-is-0 commented 4 years ago

@AndrsDev I have the same error here

joknjokn commented 4 years ago

Did anyone find a solution to this?

Also getting: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32)

I'm trying with this model and a livestreamed camera image (YUV on Android): https://tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/2

The page states:

Inputs are expected to be 3-channel RGB color images of size 224 x 224, scaled to [0, 1]. This model outputs to image_classifier.

I've tried a million things now and I can't get it to work. If I try to convert the streamed image to RGB, I get the UINT8/FLOAT32 error above.
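
Judging by the error text, the tensor type in the downloaded .tflite is UINT8, so that hosted model appears to be quantized even though the page describes [0, 1] float inputs. One thing that might be worth trying is feeding raw uint8 bytes through runModelOnBinary (an untested sketch reusing the imageToByteListUint8 helper from earlier in this thread; rgbImage is a placeholder for a camera frame already converted from YUV to an RGB img.Image and resized to 224x224):

import 'package:image/image.dart' as img;
import 'package:tflite/tflite.dart';

Future<List> classifyBird(img.Image rgbImage) {
  // A UINT8 input tensor takes raw 0..255 channel values, one byte each,
  // so no mean/std normalization is applied here.
  final input = imageToByteListUint8(rgbImage, 224);
  return Tflite.runModelOnBinary(binary: input);
}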

Bryanx commented 4 years ago

I solved this by using tflite_flutter and tflite_flutter_helper instead of this library. Here is a gist in case anyone is running into this as well: https://gist.github.com/Bryanx/b839e3ceea0f9647ffbc5f90e3091742.
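
For anyone who cannot open the gist, the core of that approach with the tflite_flutter package looks roughly like this (a sketch based on tflite_flutter's Interpreter API rather than Bryanx's exact code; the asset name, input shape, and 1001-class output are placeholder assumptions for a MobileNet-style quantized classifier):

import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runQuantizedExample() async {
  // Load the .tflite model bundled as a Flutter asset.
  final interpreter = await Interpreter.fromAsset('mobilenet_v1_1.0_224_quant.tflite');

  // Input [1, 224, 224, 3] of uint8 pixel values (zero-filled here; in practice
  // this comes from your image/camera preprocessing step).
  final input = List.generate(
      1, (_) => List.generate(224, (_) => List.generate(224, (_) => List.filled(3, 0))));

  // Output [1, 1001] of uint8 scores, one per label.
  final output = List.generate(1, (_) => List.filled(1001, 0));

  interpreter.run(input, output);
  print(output[0]);
}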

tobiascornille commented 3 years ago

@Bryanx Do you think tflite_flutter_helper alone would solve the issue? I.e. is it compatible with this library?

zoraiz-WOL commented 2 years ago

Use this code to train your custom model:

import os
import numpy as np
import matplotlib.pyplot as plt

import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

# Unzip the dataset archive (Colab/Jupyter shell command)
!unzip path-of-zip-file -d path-to-save-extract-file

# Load the images from the extracted folder and split them 80/10/10
data = DataLoader.from_folder('path-of-custom-folder')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# Train and evaluate the classifier
model = image_classifier.create(train_data, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

# Export a float16-quantized .tflite model plus the label file
config = QuantizationConfig.for_float16()
model.export(export_dir='path-to-save-model', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir='path-to-save-label', quantization_config=config, export_format=ExportFormat.LABEL)
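
One note on the export above, if I am reading the Model Maker docs right: QuantizationConfig.for_float16() only quantizes the weights, so the exported model still takes float32 inputs and should load and run on the Flutter side like the non-quantized models discussed earlier in this thread, for example (asset names are placeholders for the exported files):

import 'package:tflite/tflite.dart';

Future<void> loadFloat16Model() async {
  // Loads the float16-quantized model exported by Model Maker; its inputs stay
  // float32, so the imageToByteListFloat32 helper from earlier in the thread applies.
  await Tflite.loadModel(
    model: "assets/model.tflite",
    labels: "assets/labels.txt",
  );
}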

aboubacryba commented 2 years ago

> Use this code to train your custom model:
> [training code quoted above]

You just saved my life. Thank You !!!!!