tensorflow / tensorflow


Problem when creating an Interpreter using TF Lite #53091

Closed. happyEmart closed this issue 2 years ago.

happyEmart commented 2 years ago

Please make sure that this is an issue related to performance of TensorFlow. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:performance_template

System information

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior

Hello, I'm trying to make an app that uses the GPU while training, based on the example code below: https://github.com/tensorflow/examples/tree/master/lite/examples/model_personalization

Please refer to the code above. Following the GPU delegate guide (https://www.tensorflow.org/lite/performance/gpu_advanced), I added "implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'" to the build.gradle of transfer_api:

implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'
// This dependency adds the necessary TF op support.
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'

In addition, I changed the LiteMultipleSignatureModel constructor as follows:

LiteMultipleSignatureModel(ByteBuffer tfLiteModel, int numClasses) {
  // Initialize interpreter with GPU delegate
  Interpreter.Options options = new Interpreter.Options();
  CompatibilityList compatList = new CompatibilityList();

  if (compatList.isDelegateSupportedOnThisDevice()) {
    // if the device has a supported GPU, add the GPU delegate
    GpuDelegate.Options delegateOptions = compatList.getBestOptionsForThisDevice();
    GpuDelegate gpuDelegate = new GpuDelegate(delegateOptions);
    options.addDelegate(gpuDelegate);
  } else {
    // if the GPU is not supported, run on 4 threads
    options.setNumThreads(4);
  }

  this.interpreter = new Interpreter(tfLiteModel, options);
  this.numClasses = numClasses;
}
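For completeness, here is a minimal sketch (not from the example app; the gpuDelegate field and the close() method are my own assumptions) of keeping a reference to the delegate so it can be released once the interpreter is closed, since a delegate created by the app is not freed automatically:

// Sketch only (assumption): hold the delegate in a field and release it in close().
private GpuDelegate gpuDelegate;  // assign this where the delegate is created above

void close() {
  if (interpreter != null) {
    interpreter.close();
  }
  if (gpuDelegate != null) {
    gpuDelegate.close();
  }
}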

I started the app. The device supports GPU delegation, but the app crashed with the following fatal exception:

2021-06-11 09:20:23.894 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Created TensorFlow Lite delegate for GPU.
2021-06-11 09:20:23.899 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Initialized TensorFlow Lite runtime.
2021-06-11 09:20:23.969 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Created TensorFlow Lite delegate for select TF ops.
2021-06-11 09:20:23.972 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: TfLiteFlexDelegate delegate: 0 nodes delegated out of 79 nodes with 0 partitions.
2021-06-11 09:20:23.972 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Replacing 2 node(s) with delegate (TfLiteFlexDelegate) node, yielding 3 partitions.
2021-06-11 09:20:23.990 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Replacing 2 node(s) with delegate (TfLiteFlexDelegate) node, yielding 3 partitions.
2021-06-11 09:20:23.990 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Replacing 1 node(s) with delegate (TfLiteFlexDelegate) node, yielding 2 partitions.
2021-06-11 09:20:23.990 8460-8460/org.tensorflow.lite.examples.transfer I/tflite: Replacing 4 node(s) with delegate (TfLiteFlexDelegate) node, yielding 3 partitions.
2021-06-11 09:20:23.991 8460-8460/org.tensorflow.lite.examples.transfer D/AndroidRuntime: Shutting down VM
2021-06-11 09:20:23.992 8460-8460/org.tensorflow.lite.examples.transfer E/AndroidRuntime: FATAL EXCEPTION: main
    Process: org.tensorflow.lite.examples.transfer, PID: 8460
    java.lang.IllegalArgumentException: Internal error: Error applying delegate:
        at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:93)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:66)
        at org.tensorflow.lite.NativeInterpreterWrapperExperimental.<init>(NativeInterpreterWrapperExperimental.java:44)
        at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:226)
        at org.tensorflow.lite.examples.transfer.api.LiteMultipleSignatureModel.<init>(LiteMultipleSignatureModel.java:63)
        at org.tensorflow.lite.examples.transfer.api.TransferLearningModel.<init>(TransferLearningModel.java:114)
        at org.tensorflow.lite.examples.transfer.TransferLearningModelWrapper.<init>(TransferLearningModelWrapper.java:46)
        at org.tensorflow.lite.examples.transfer.CameraFragment.onCreate(CameraFragment.java:332)

At first the app created the GPU delegate, but the model ended up being delegated to TfLiteFlexDelegate instead. I guess this is not right.

I also tried setting tfLiteModel to native byte order with tfLiteModel.order(ByteOrder.nativeOrder()), but got the same result.
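For reference, this is roughly how a .tflite model buffer is typically loaded on Android (a generic sketch, not the example app's actual loading code; the asset name "model.tflite" and the helper name loadModelFile are assumptions):

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch only: memory-map the model from assets and apply native byte order.
static ByteBuffer loadModelFile(Context context) throws IOException {
  try (AssetFileDescriptor fd = context.getAssets().openFd("model.tflite");
       FileInputStream inputStream = new FileInputStream(fd.getFileDescriptor());
       FileChannel fileChannel = inputStream.getChannel()) {
    MappedByteBuffer buffer = fileChannel.map(
        FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
    // Same order(...) call as tried above; a mapped buffer is already direct.
    return buffer.order(ByteOrder.nativeOrder());
  }
}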

Please help me find a way to resolve this problem :)

Thank you

Describe the expected behavior

Standalone code to reproduce the issue

Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

srjoglekar246 commented 2 years ago

@haozha111 FYI

We do not currently support GPU acceleration on the models generated for personalization/on-device training. These graphs have some advanced data structures & properties that the mobile GPU backend can't currently handle :-). We are internally investigating ways to solve this in the coming months.
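Not stated in the thread, but one possible interim fallback (a sketch under the assumption that the CPU path works for these training graphs, which the else branch in the snippet above already exercises) is to catch the delegate failure and recreate the interpreter without the GPU delegate:

// Sketch of a possible interim fallback (assumption, not an official recommendation):
// try the GPU delegate first and fall back to CPU threads if applying it fails.
Interpreter createInterpreter(ByteBuffer model) {
  GpuDelegate gpuDelegate = new GpuDelegate();
  try {
    Interpreter.Options gpuOptions = new Interpreter.Options();
    gpuOptions.addDelegate(gpuDelegate);
    return new Interpreter(model, gpuOptions);
  } catch (IllegalArgumentException e) {
    // "Internal error: Error applying delegate" ends up here; release the delegate
    // and recreate the interpreter on the CPU path instead.
    gpuDelegate.close();
    Interpreter.Options cpuOptions = new Interpreter.Options();
    cpuOptions.setNumThreads(4);
    return new Interpreter(model, cpuOptions);
  }
}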