googlecodelabs / tensorflow-for-poets-2


AutoML TfLite Android Edge Device Tutorial: BufferOverflowException #124

Open tvanfossen opened 5 years ago

tvanfossen commented 5 years ago

I've followed the steps in these tutorials to build a custom edge-device model, exported it, and adapted the sample code as described:

https://cloud.google.com/vision/automl/docs/edge-quickstart
https://cloud.google.com/vision/automl/docs/tflite-android-tutorial

The custom model I am trying to bring up in the TFLite camera app sample was trained on 100k+ images across 25 labels. The app runs fine with the pretrained model that ships with the repo.

The sample app code has been adjusted per the tflite-android tutorial above.

The code fails with a BufferOverflowException inside convertBitmapToByteBuffer at:

    imgData.putFloat((((val >> 16) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
    imgData.putFloat((((val >> 8) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
    imgData.putFloat((((val) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
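
For reference, here is a minimal sketch of the conversion path as I understand it from the sample's ImageClassifier. The dimension and normalization constants are placeholders and not necessarily the exact values in my adapted code; the buffer capacity comment is where I suspect the mismatch is:

    import android.graphics.Bitmap;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    /** Sketch of the float-input conversion path; all constants here are assumed, not exact. */
    class FloatInputConverter {
      private static final int DIM_BATCH_SIZE = 1;
      private static final int DIM_IMG_SIZE_X = 224;
      private static final int DIM_IMG_SIZE_Y = 224;
      private static final int DIM_PIXEL_SIZE = 3;      // RGB channels
      private static final float IMAGE_MEAN = 128.0f;
      private static final float IMAGE_STD = 128.0f;

      private final int[] intValues = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];

      // A float input tensor needs 4 bytes per channel value; a quantized (uint8) tensor needs 1.
      // If this buffer is sized without the leading factor of 4 (as in the quantized sample) but
      // putFloat() is used below, each pixel writes 4x the budgeted bytes and the buffer overflows.
      private final ByteBuffer imgData = ByteBuffer.allocateDirect(
          4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE)
          .order(ByteOrder.nativeOrder());

      // Assumes the bitmap has already been scaled to DIM_IMG_SIZE_X x DIM_IMG_SIZE_Y.
      void convertBitmapToByteBuffer(Bitmap bitmap) {
        imgData.rewind();
        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
          for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
            final int val = intValues[pixel++];
            imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD); // R
            imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);  // G
            imgData.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);         // B
          }
        }
      }
    }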

Any suggestions as to why this might be occurring?

rabingaire commented 5 years ago

I have also faced the same issue. I changed putFloat to put and cast the value to (byte):

    imgData.put((byte) ((((val >> 16) & 0xFF)-IMAGE_MEAN)/IMAGE_STD));

The app stopped crashing, but the output the model predicts is very, very wrong.
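
If the exported Edge model is quantized (uint8 input), one likely reason this variant still mispredicts is that the mean/std normalization is applied before the cast. A sketch of the usual quantized-path conversion, assuming a uint8 input tensor and the same allocation/loop context as the sample's convertBitmapToByteBuffer:

    // 1 byte per channel value for a uint8 input, so no factor of 4 in the allocation.
    ByteBuffer imgData = ByteBuffer.allocateDirect(
        DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());

    // Inside the pixel loop: write the raw channel bytes, with no IMAGE_MEAN / IMAGE_STD.
    // Normalizing to roughly [-1, 1] first and then casting to byte collapses nearly every
    // value to 0 or -1, which would explain the badly wrong predictions.
    final int val = intValues[pixel++];
    imgData.put((byte) ((val >> 16) & 0xFF)); // R
    imgData.put((byte) ((val >> 8) & 0xFF));  // G
    imgData.put((byte) (val & 0xFF));         // B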

tvanfossen commented 5 years ago

I used this tutorial instead: https://www.tensorflow.org/lite/models/image_classification/android

And it works with no problem. Just swap the .tflite and .txt files exported from AutoML into the assets folder and change the ClassifierQuantizedModel to point to the custom files instead of the pregenerated ones.
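
In the version of that sample I used, the quantized classifier class returns its asset file names from a couple of getter overrides, so the change amounts to something like the sketch below. The method names follow the structure of the example app's Classifier subclasses, and the asset file names are placeholders for whatever AutoML exported into src/main/assets:

    // Sketch: point the quantized classifier at the AutoML export instead of the bundled model.
    @Override
    protected String getModelPath() {
      return "automl_edge_model.tflite";   // placeholder name for the exported AutoML model
    }

    @Override
    protected String getLabelPath() {
      return "automl_edge_labels.txt";     // placeholder name for the exported label file
    }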