majidghafouri / Object-Recognition-tf-lite

Object detection model trained using the Tensorflow Object Detection API.
Apache License 2.0

Teachable Machines Output for Object Recognition #1

Closed eneshb closed 4 years ago

eneshb commented 4 years ago

Is it possible to run your object recognition code with a tflite model exported from Google's Teachable Machine? Image classification works fine with the custom Teachable Machine model, but object recognition does not. At first I got the following exception:

java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 150528 bytes and a Java Buffer with 146523 bytes.

After figuring out that I had to set the right input size in private static final int TF_OD_API_INPUT_SIZE, I am now stuck on this exception:

java.lang.IllegalArgumentException: Invalid output Tensor index: 1
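For reference, the 150528 bytes in the first exception follow directly from the model's uint8 input shape [1, 224, 224, 3], which is why setting the input size to 224 fixes it. A minimal arithmetic check (class and variable names are mine, not from the repo):

```java
public class InputSizeCheck {
    public static void main(String[] args) {
        // Input tensor shape reported by the model: uint8[1, 224, 224, 3]
        int batch = 1, height = 224, width = 224, channels = 3;
        int bytesPerElement = 1; // uint8 -> 1 byte; a float32 model would need 4
        int expectedBytes = batch * height * width * channels * bytesPerElement;
        // Matches the 150528-byte TensorFlowLite buffer in the exception,
        // so TF_OD_API_INPUT_SIZE must be 224 for this model.
        System.out.println(expectedBytes);
    }
}
```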

I already tried quantized and unquantized models, and yes, I also tested with TF_OD_API_IS_QUANTIZED both checked and unchecked.

My model properties:

id: sequential_1_input
type: uint8[1,224,224,3]
quantization: 0.003921568859368563 * q

id: sequential_3/dense_Dense2/Softmax
type: uint8[1,3]
quantization: 0.00390625 * q
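As an aside, the quantization lines above describe affine dequantization with a zero point of 0: real = scale × q, where the input scale is approximately 1/255 (so uint8 255 maps to roughly 1.0). A quick standalone check (names are mine):

```java
public class DequantizeCheck {
    public static void main(String[] args) {
        // Affine dequantization with zero point 0: real = scale * q
        float inputScale = 0.003921568859368563f; // ~ 1/255, from the model metadata
        int q = 255;                              // maximum uint8 value
        float real = inputScale * q;
        System.out.println(real);                 // very close to 1.0
    }
}
```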

majidghafouri commented 4 years ago

Yes, it is possible. The error "Invalid output Tensor index" is related to the size of the output. The number of detections is probably hard-coded to a value that does not match your model, such as 10, while your model outputs a different size, say x. Hard-coding the number of detections to x should work.
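For context, detection code in the style of the TensorFlow Lite Android example registers four hard-coded output buffers (indices 0 to 3) sized by a NUM_DETECTIONS constant; if the loaded model exposes fewer output tensors (a Teachable Machine classifier has only the single [1,3] Softmax output), asking the interpreter for output index 1 raises exactly this exception. A minimal sketch of that output wiring, with the interpreter call shown only as a comment (the structure follows the TFLite detection demo, but names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class DetectorOutputs {
    // Hard-coded number of boxes the SSD-style model is expected to return.
    private static final int NUM_DETECTIONS = 10;

    public static void main(String[] args) {
        // The detection pipeline allocates FOUR output tensors:
        float[][][] outputLocations = new float[1][NUM_DETECTIONS][4]; // boxes
        float[][] outputClasses = new float[1][NUM_DETECTIONS];        // class ids
        float[][] outputScores = new float[1][NUM_DETECTIONS];         // confidences
        float[] numDetections = new float[1];                          // count

        Map<Integer, Object> outputMap = new HashMap<>();
        outputMap.put(0, outputLocations);
        outputMap.put(1, outputClasses);
        outputMap.put(2, outputScores);
        outputMap.put(3, numDetections);

        // tfLite.runForMultipleInputsOutputs(inputArray, outputMap) would then
        // fail with "Invalid output Tensor index: 1" for a model whose only
        // output tensor is a [1,3] classification Softmax at index 0.
        System.out.println(outputMap.size());
    }
}
```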

eneshb commented 4 years ago

Hi, thanks for the reply. Can you tell me the output size/number of detections of my model from the screenshot below? [image: image.png]


majidghafouri commented 4 years ago

Please share your code. I can't see the screenshot. @eneshb