Closed mlopez0 closed 3 years ago
Hi @mlopez0, can you share the exact error log? Also, can you double-check that you are using the same .tflite file in both your Android and Flutter versions?
Thanks for the fast response @am15h, I really appreciate your time and effort!
Yes, I'm using the same final_model.tflite in both projects.
When I run flutter run -v, I just get this:
[ +106 ms] I/CameraManagerGlobal( 5945): Connecting to camera service
[ +649 ms] W/Gralloc4( 5945): allocator 3.x is not supported
[ +358 ms] I/tflite ( 5945): Initialized TensorFlow Lite runtime.
[ +887 ms] E/flutter ( 5945): [ERROR:flutter/runtime/dart_isolate.cc(988)] Unhandled exception:
[ ] E/flutter ( 5945): Bad state: failed precondition
[ ] E/flutter ( 5945): #0 checkState (package:quiver/check.dart:71:5)
[ +4 ms] E/flutter ( 5945): #1 Tensor.setTo (package:tflite_flutter/src/tensor.dart:150:5)
[ ] E/flutter ( 5945): #2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:194:33)
[ ] E/flutter ( 5945): #3 Classifier.predict (package:object_detection/tflite/classifier.dart:137:18)
[ ] E/flutter ( 5945): #4 IsolateUtils.entryPoint (package:object_detection/utils/isolate_utils.dart:45:51)
[ ] E/flutter ( 5945): <asynchronous suspension>
[+1344 ms] W/System ( 5945): A resource failed to call release.
[+2014 ms] I/bject_detectio( 5945): Background concurrent copying GC freed 574(82KB) AllocSpace objects, 24(1840KB) LOS objects, 49% free, 3388KB/6777KB, paused 33us total 146.714ms
[+2678 ms] I/bject_detectio( 5945): Background concurrent copying GC freed 721(82KB) AllocSpace objects, 27(1956KB) LOS objects, 49% free, 4341KB/8683KB, paused 30us total 426.004ms
The app keeps running, with no detections.
The only modification I made to the project was in the classifier.dart file, lines 20 & 21:
static const String MODEL_FILE_NAME = "final_model.tflite";
static const String LABEL_FILE_NAME = "final_model.txt";
Here's also a screenshot when I debug the application, with the exception I mentioned in my previous message.
Thanks for the support!
I am facing the same error as in the image.
I/flutter ( 8266): Caught error: Unable to load asset: assets/assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite, while trying to load asset from "assets/assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite"
I/flutter ( 8266): Caught error: NoSuchMethodError: The getter 'buffer' was called on null.
I/flutter ( 8266): Receiver: null
I/flutter ( 8266): Tried calling: buffer, while trying to create interpreter from asset: assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite
E/flutter ( 8266): [ERROR:flutter/lib/ui/ui_dart_state.cc(177)] Unhandled Exception: NoSuchMethodError: The getter 'length' was called on null.
E/flutter ( 8266): Receiver: null
E/flutter ( 8266): Tried calling: length
E/flutter ( 8266): #0      Object.noSuchMethod (dart:core-patch/object_patch.dart:51:5)
E/flutter ( 8266): #1      new Model.fromBuffer (package:tflite_flutter/src/model.dart:32)
E/flutter ( 8266): #2      new Interpreter.fromBuffer (package:tflite_flutter/src/interpreter.dart:90)
E/flutter ( 8266): #3      Interpreter.fromAsset (package:tflite_flutter/src/interpreter.dart:114)
E/flutter ( 8266): <asynchronous suspension>
E/flutter ( 8266): #4      _ImagePageState.build.<anonymous closure>
Flutter doctor
$ flutter doctor -v
[✓] Flutter (Channel stable, 1.22.4, on Linux, locale en_IN)
• Flutter version 1.22.4 at /media/pi/Pi/Code/DontWaitApp/flutter
• Framework revision 1aafb3a8b9 (3 weeks ago), 2020-11-13 09:59:28 -0800
• Engine revision 2c956a31c0
• Dart version 2.10.4
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at /home/pi/Android/Sdk
• Platform android-29, build-tools 29.0.3
• Java binary at: /opt/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
• All Android licenses accepted.
[✓] Android Studio (version 4.0)
• Android Studio at /opt/android-studio
• Flutter plugin version 47.1.2
• Dart plugin version 193.7361
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[✓] Connected device (1 available)
• Redmi Note 5 Pro (mobile) • c548aae1 • android-arm64 • Android 9 (API 28)
• No issues found!
@mlopez0 In your case, I suspect that the shape(or type) of input and output buffers that you are passing into the model, is different than that expected by the model. Can you please try out what I mention in this https://github.com/am15h/tflite_flutter_plugin/issues/29#issuecomment-663587710.
> I/flutter ( 8266): Caught error: Unable to load asset: assets/assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite, while trying to load asset from "assets/assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite"
@piyushchauhan Can you please check whether you have listed your asset in pubspec.yaml and that the path is correct?
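For reference, a minimal sketch of what the asset declaration might look like (the path mirrors the one in the error log; adjust it to your actual file location). If I remember correctly, tflite_flutter's Interpreter.fromAsset() prepends "assets/" to the name you pass, which would explain the doubled "assets/assets/" prefix in the log above.

```yaml
# pubspec.yaml (sketch) -- declare the model so Flutter bundles it.
flutter:
  assets:
    - assets/efficientnet-lite3/efficientnet-lite3-fp32.tflite
```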
@am15h Thanks for the fast response and support!
I read the thread multiple times to understand the problem, and discovered that you were right: the type is incorrect.
My custom model
The output of var inputT = _interpreter.getInputTensors();
is:
[Tensor{
    _tensor: Pointer<TfLiteTensor>: address=0x7db8063b80,
    name: normalized_input_image_tensor,
    type: TfLiteType.float32,
    shape: [1, 300, 300, 3],
    data: 1080000
}]
and the output of var outputT = _interpreter.getOutputTensors();
is:
[
Tensor{
    _tensor: Pointer<TfLiteTensor>: address=0x7db8063800,
    name: TFLite_Detection_PostProcess,
    type: TfLiteType.float32,
    shape: [1, 10, 4],
    data: 160
},
Tensor{
    _tensor: Pointer<TfLiteTensor>: address=0x7db8063870,
    name: TFLite_Detection_PostProcess:1,
    type: TfLiteType.float32,
    shape: [1, 10],
    data: 40
},
Tensor{
    _tensor: Pointer<TfLiteTensor>: address=0x7db80638e0,
    name: TFLite_Detection_PostProcess:2,
    type: TfLiteType.float32,
    shape: [1, 10],
    data: 40
},
Tensor{
    _tensor: Pointer<TfLiteTensor>: address=0x7db8063950,
    name: TFLite_Detection_PostProcess:3,
    type: TfLiteType.float32,
    shape: [1],
    data: 4
}
]
The differences are in the data and type: I'm sending a float32, but a uint8 is required.
I replaced line 124:
List<Object> inputs = [inputImage.buffer];
with this one:
List<Object> inputs = [inputImage.buffer.asUint8List()];
But I'm still getting the same error: 'failed precondition'.
Am I doing it right, or do I need to replace something else?
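In case it helps others hitting the same 'failed precondition': a sketch of how one might line up the output buffers with the tensors printed above. This assumes the tflite_flutter_helper TensorBuffer API; variable names here are placeholders, not the actual project code.

```dart
// Sketch: inspect the model's tensors, then allocate matching buffers.
final outputTensors = _interpreter.getOutputTensors();
for (final t in outputTensors) {
  print('${t.name}: ${t.type} ${t.shape}'); // verify type/shape by eye
}

// For the four SSD post-process outputs shown above
// ([1,10,4], [1,10], [1,10], [1]), all float32:
final locations = TensorBuffer.createFixedSize([1, 10, 4], TfLiteType.float32);
final classes = TensorBuffer.createFixedSize([1, 10], TfLiteType.float32);
final scores = TensorBuffer.createFixedSize([1, 10], TfLiteType.float32);
final numDetections = TensorBuffer.createFixedSize([1], TfLiteType.float32);

final outputs = <int, Object>{
  0: locations.buffer,
  1: classes.buffer,
  2: scores.buffer,
  3: numDetections.buffer,
};
// The input buffer's byte length must equal the tensor's data size:
// here 1 * 300 * 300 * 3 float32 values * 4 bytes = 1080000 bytes.
_interpreter.runForMultipleInputs([inputImage.buffer], outputs);
```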
Regarding the NormalizeOp...
I added the NormalizeOp line to the current pre-processing:
TensorImage getProcessedImage(TensorImage inputImage) {
  padSize = max(inputImage.height, inputImage.width);
  if (imageProcessor == null) {
    imageProcessor = ImageProcessorBuilder()
        .add(ResizeWithCropOrPadOp(padSize, padSize))
        .add(NormalizeOp(0.0, 0.03))
        .add(ResizeOp(INPUT_SIZE, INPUT_SIZE, ResizeMethod.BILINEAR))
        .build();
  }
  inputImage = imageProcessor.process(inputImage);
  return inputImage;
}
But now I'm getting the following error message:
(Bad state: TensorImage is holding a float-value image which is not able to convert a Image.)
if (_bufferImage.getDataType() != TfLiteType.uint8) {
  throw StateError(
      "TensorImage is holding a float-value image which is not able to convert a Image.");
}
So it's not clear to me where I should make the change.
Update:
After a long debugging session I managed to solve the issue. This is the only change I made (adding the NormalizeOp()):
TensorImage getProcessedImage(TensorImage inputImage) {
  padSize = max(inputImage.height, inputImage.width);
  if (imageProcessor == null) {
    imageProcessor = ImageProcessorBuilder()
        .add(ResizeWithCropOrPadOp(padSize, padSize))
        .add(NormalizeOp(0.0, 0.03))
        .add(ResizeOp(INPUT_SIZE, INPUT_SIZE, ResizeMethod.BILINEAR))
        .build();
  }
  inputImage = imageProcessor.process(inputImage);
  return inputImage;
}
But now I'm dealing with a different issue: the model detects almost nothing, and the confidence is very low. I lowered the threshold to 0.1 to debug. What might be wrong?
Here are two screenshots of the same model, one running in Android Studio (native) and the other from my Flutter app.
Hi, I just faced this problem.
Awesome, setting the mean to 127.5 and the std to 127.5 solved my issue.
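For anyone wondering why those values work: NormalizeOp(mean, std) computes (pixel - mean) / std, so mean = std = 127.5 maps the uint8 range [0, 255] onto roughly [-1, 1], which is what this float model appears to expect. A tiny sketch of the arithmetic:

```dart
// What NormalizeOp(mean, std) does to each pixel value (sketch).
double normalize(num pixel, double mean, double std) => (pixel - mean) / std;

void main() {
  print(normalize(0, 127.5, 127.5));   // -1.0
  print(normalize(255, 127.5, 127.5)); //  1.0
  // The earlier NormalizeOp(0.0, 0.03) blew the range up instead:
  print(normalize(255, 0.0, 0.03));    // ~8500, far outside [-1, 1]
}
```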
Thanks for all the support and help @am15h @kietdinh
I followed this Colab to train a custom model.
Conversion process Colab
After completing the training process, I converted the .pb to .tflite and got these files. When I loaded the model into the official Android demo, I got the following error:
The solution to that error is discussed in this issue
I followed the solution:
Installing the metadata library with:
and then executing the following command in Python.
And I got the following warning:
Back in the Android Studio example, I was able to run the model successfully by just modifying this information:
But when I bring my model into my Flutter application, I get the following exception:
The flutter project is fully working with coco_ssd_mobilenet_v1_1.0_quant_2018_06_29
I also tried re-running the metadata commands on my Flutter application's assets, but I still get the same issue.
What am I missing?
Edited: In Android Studio (the Android app) I set the variable IS_MODEL_QUANTIZED to false; is the issue related to this?