Open iammohit1311 opened 1 year ago
Hey! So this is for image classification, yeah? Since it's a custom model, would you mind running it through this tool that we have to see if it works there? If it works there, then can you share a link to your code up on GitHub (unless you're using an unmodified version of one of the examples) as well as a link to the model to investigate further?
Thanks!
Hi! Thank you for replying. This is for live object detection. I have tried running my model on Android (Kotlin) and it works perfectly. I have tested it in a Jupyter Notebook as well, and it works well. I am using the unmodified version of this repository's live_object_detection_ssd_mobilenet example. The model is not supposed to be open source, so please drop your email and I will mail it to you instead!
@PaulTR I use a Google Teachable Machine trained model with the live_object_detection_ssd_mobilenet example and get the same error!
Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 2] while shape of output provided as argument in run is: [1, 10, 4]
How do I configure the example to run a Google Teachable Machine trained model?
Ah. I have no idea, I've never used Google Teachable Machine, so I don't know what it's outputting that's different. When you say you've run this on Android, are you using the Task Library, or direct TensorFlow Lite inference? The Task Library does some stuff under the hood to figure out shapes and work correctly that might not translate as well to this without knowing exactly what your model does. Unfortunately that might fall a bit outside of the scope of what we can help with here.
@PaulTR I am using direct TensorFlow Lite inference. Also, my model is simply trained using TensorFlow 2.x
I have this issue as well, trained a custom model using Teachable Machine and applied it to my Flutter app using the live object detection model provided in this repo. When I run the app on my physical iPhone, this error pops up: "[VERBOSE-2:dart_isolate.cc(1098)] Unhandled exception: Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 36] while shape of output provided as argument in run is: [1, 10, 4]"
Any idea around this? Perhaps I should change the output arguments to match the custom-trained model?
Did you find anything? I would recommend testing your model in an Android (Kotlin) project just to make sure it runs properly there. If it does, it's likely an issue with this repository, since it is still under development.
I have this issue as well 归档.zip
Same error when I used my coin model made from a YouTube tutorial.
Does anyone know how to fix this problem? I am using the EfficientDet-Lite0 model. I have tested the model with the MediaPipe example project code and it works over there, but with the Flutter plugin I get this error:
```
[ERROR:flutter/runtime/dart_isolate.cc(1097)] Unhandled exception:
E/flutter (10232): Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 25] while shape of output provided as argument in run is: [1, 25, 4]
E/flutter (10232): #0  Tensor._duplicateList (package:tflite_flutter/src/tensor.dart:233:7)
E/flutter (10232): #1  Tensor.copyTo (package:tflite_flutter/src/tensor.dart:203:7)
E/flutter (10232): #2  Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:183:24)
E/flutter (10232): #3  _DetectorServer._runInference (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:363:19)
E/flutter (10232): #4  _DetectorServer.analyseImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:280:20)
E/flutter (10232): #5  _DetectorServer._convertCameraImage.<anonymous closure> (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:242:25)
E/flutter (10232): <asynchronous suspension>
```
The input tensor for EfficientDet is 320x320.
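For what it's worth, instead of hard-coding the input size the example uses, the expected size can be read from the interpreter and the frame resized to match. A rough sketch (untested; it assumes the usual [1, height, width, 3] input layout and the image package the example already depends on):

```dart
import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

/// Resizes a camera frame to whatever input size the loaded model expects,
/// instead of hard-coding the example's input size.
/// Assumes the common [1, height, width, 3] input layout.
img.Image resizeForModel(Interpreter interpreter, img.Image frame) {
  final inputShape = interpreter.getInputTensor(0).shape;
  final height = inputShape[1];
  final width = inputShape[2];
  return img.copyResize(frame, width: width, height: height);
}
```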
Sorry, I hadn't realized there was a comment on my comment. I don't have access to a physical Android device at the moment ^^", but I appreciate the suggestion.
Any updates regarding this issue or its root cause?
Same problem. I use a custom model based on ssd-mobilenet-v2. I checked it in MediaPipe Studio and it works.
I solved it. In your code you will have the output objects/tensors in the form of a few arrays.
The order of those arrays should be the same as the order of the model's output tensors. Try and test various orders.
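A quick way to see the right order, if it helps (my own sketch, not from the example): print the model's output tensors at startup and arrange the output map to match what it reports.

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

/// Prints every output tensor with its index, name, and shape, so you can
/// see in which order your particular model emits boxes, classes, scores,
/// and the detection count.
void debugPrintOutputs(Interpreter interpreter) {
  final outputs = interpreter.getOutputTensors();
  for (var i = 0; i < outputs.length; i++) {
    print('output $i: name=${outputs[i].name} shape=${outputs[i].shape}');
  }
}
```

For an SSD-style model you would expect something like a [1, 10, 4] boxes tensor, two [1, 10] tensors for classes and scores, and a [1] count, but the index order varies between exported models.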
@ysumiit005 hello, what did you do to solve it?
Hi @ysumiit005! We are experiencing the same problem, is it okay if you share how you solved it? Thank you!
Hi @iammohit1311! I am currently experiencing the same problem, did you already manage to solve it?
In the live object detection example, in the file detector_service.dart, we have:
```dart
final output = {
  0: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
  1: [List<num>.filled(10, 0)],
  2: [List<num>.filled(10, 0)],
  3: [0.0],
};
```
It doesn't always fit. In my case, it worked with:
```dart
final output = {
  0: [List<num>.filled(10, 0)],
  1: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
  2: [0.0],
  3: [List<num>.filled(10, 0)],
};
```
If we go to the model's page on Kaggle and look at the Outputs section, we will find the right order.
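If you'd rather not guess, the map can also be built from the shapes the interpreter itself reports, so the order no longer matters. A sketch of that idea (untested; it assumes SSD-style outputs and the same runForMultipleInputs call the example uses):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

/// Builds the output map for runForMultipleInputs from the shapes the model
/// itself reports, instead of hard-coding the index order.
/// Sketch only: assumes SSD-style outputs ([1, N, 4] boxes, [1, N] classes
/// and scores, [1] detection count).
Map<int, Object> buildOutputMap(Interpreter interpreter) {
  final output = <int, Object>{};
  final tensors = interpreter.getOutputTensors();
  for (var i = 0; i < tensors.length; i++) {
    final shape = tensors[i].shape; // e.g. [1, 10, 4], [1, 10], or [1]
    if (shape.length == 3) {
      output[i] = [
        List<List<num>>.filled(shape[1], List<num>.filled(shape[2], 0)),
      ];
    } else if (shape.length == 2) {
      output[i] = [List<num>.filled(shape[1], 0)];
    } else {
      output[i] = [0.0];
    }
  }
  return output;
}
```

The result can then be passed to interpreter.runForMultipleInputs in place of the hand-written map.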
@rickgrotavi Hi, I tried your code and it returns this error. Do you know how to fix this problem?
```
E/FlutterJNI( 837): android.graphics.ImageDecoder$DecodeException: Failed to create image decoder with message 'unimplemented'Input contained an error.
E/FlutterJNI( 837): at android.graphics.ImageDecoder.nCreate(Native Method)
E/FlutterJNI( 837): at android.graphics.ImageDecoder.-$$Nest$smnCreate(Unknown Source:0)
E/FlutterJNI( 837): at android.graphics.ImageDecoder$ByteBufferSource.createImageDecoder(ImageDecoder.java:242)
E/FlutterJNI( 837): at android.graphics.ImageDecoder.decodeBitmapImpl(ImageDecoder.java:2015)
E/FlutterJNI( 837): at android.graphics.ImageDecoder.decodeBitmap(ImageDecoder.java:2008)
E/FlutterJNI( 837): at io.flutter.embedding.engine.FlutterJNI.decodeImage(FlutterJNI.java:558)
D/BLASTBufferQueue( 837): [SurfaceView[com.example.object_detection_ssd_mobilenet/com.example.object_detection_ssd_mobilenet.MainActivity]@0#5](f:0,a:0) onFrameAvailable the first frame is available
```
I have trained a custom model on SSD MobileNet V2 FPNLite 320 x 320. This is the error I'm getting:
```
Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 10] while shape of output provided as argument in run is: [1, 10, 4]
E/flutter (27268): #0  Tensor._duplicateList (package:tflite_flutter/src/tensor.dart:232:7)
E/flutter (27268): #1  Tensor.copyTo (package:tflite_flutter/src/tensor.dart:202:7)
E/flutter (27268): #2  Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:183:24)
E/flutter (27268): #3  _DetectorServer._runInference (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:363:19)
E/flutter (27268): #4  _DetectorServer.analyseImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:284:20)
E/flutter (27268): #5  _DetectorServer._convertCameraImage.<anonymous closure> (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:246:25)
E/flutter (27268): <asynchronous suspension>
```