tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

Auto-Applying XNNPACK Delegate #56571

Closed: jishminor closed this issue 1 year ago

jishminor commented 2 years ago
### Issue Type

Support

### Source

source

### Tensorflow Version

tf 2.5.0

### Custom Code

Yes

### OS Platform and Distribution

Linux Ubuntu 20.04

### Mobile device

_No response_

### Python version

_No response_

### Bazel version

_No response_

### GCC/Compiler version

_No response_

### CUDA/cuDNN version

_No response_

### GPU model and memory

_No response_

### Current Behaviour?

After moving from TFLite 2.4.1 to 2.5.0 and compiling with `TFLITE_ENABLE_XNNPACK` using the CMake build, it seems that in 2.5.0 the XNNPACK delegate is automatically applied to the model. If you then attempt to apply the XNNPACK delegate explicitly in your own use of the API, you get an error indicating that a delegate has already been applied and the graph is immutable, so no further delegate can be applied. Starting with tf 2.5.0, is there a way to compile with CMake with XNNPACK support such that the XNNPACK delegate is not automatically applied? I would like to explicitly apply the XNNPACK delegate with my own options.

### Standalone code to reproduce the issue

```c++
TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
options.num_threads = 4;
tflite::Interpreter::TfLiteDelegatePtr xnnpack_delegate(
    TfLiteXNNPackDelegateCreate(&options),
    [](TfLiteDelegate* xnnpack_delegate) {
      TfLiteXNNPackDelegateDelete(xnnpack_delegate);
    });

// Instruct the interpreter to use the XNNPACK delegate
if (interpreter_->ModifyGraphWithDelegate(std::move(xnnpack_delegate)) !=
    kTfLiteOk) {
  return TRITONSERVER_ErrorNew(
      TRITONSERVER_ERROR_INTERNAL,
      ("failed to use xnnpack delegate for model " + Name()).c_str());
}

// Allocate memory for input and output tensors
if (interpreter_->AllocateTensors() != kTfLiteOk) {
  return TRITONSERVER_ErrorNew(
      TRITONSERVER_ERROR_INTERNAL,
      ("TfLite interpreter failed to allocate tensor inputs for model " +
       Name())
          .c_str());
}
```

### Relevant log output

```
ERROR: ModifyGraphWithDelegate is disallowed when graph is immutable.
ERROR: Ignoring failed application of the default TensorFlow Lite delegate indexed at 0.
```
mohantym commented 2 years ago

Hi @jishminor! You can disable the XNNPACK behavior, which is enabled by default, by passing the flag `TFLITE_ENABLE_XNNPACK=OFF` when configuring your CMake build. Attached relevant thread for reference. Thank you!
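
For example, the configure step might look like this (a sketch; the source-tree paths follow the TFLite CMake guide's layout and are illustrative, not taken from this thread):

```shell
# Configure and build TFLite via CMake with XNNPACK disabled.
# Paths are illustrative; adjust them to your checkout.
mkdir tflite_build && cd tflite_build
cmake ../tensorflow_src/tensorflow/lite -DTFLITE_ENABLE_XNNPACK=OFF
cmake --build . -j
```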

jishminor commented 2 years ago

Thanks for the response.

I was looking for a build option that includes the XNNPACK delegate in the library but does not automatically apply it when a model is loaded. I would like my application logic to control whether the XNNPACK delegate is created and applied.

mohantym commented 2 years ago

Hi @sachinprasadhs! Could you please take a look at this issue? Thank you!

sachinprasadhs commented 2 years ago

As suggested above, you can build TFLite with `TFLITE_ENABLE_XNNPACK=OFF`, which disables XNNPACK by default; you can then enable it in your code using the steps mentioned here.

google-ml-butler[bot] commented 2 years ago

This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] commented 2 years ago

Closing as stale. Please reopen if you'd like to work on this further.

jishminor commented 2 years ago

It seems the option `TFLITE_ENABLE_XNNPACK=OFF`, used with the CMake build of TFLite v2.5.0, does not include XNNPACK in the resulting library at all. The referenced tutorial uses the Bazel build of TFLite with the Bazel build option `//tensorflow/lite:tflite_with_xnnpack`. This "with XNNPACK" option is not available for the CMake build.

jishminor commented 2 years ago

Is the above correct, @sachinprasadhs?

sachinprasadhs commented 2 years ago

You can use the same options for CMake as well; check the details here: https://www.tensorflow.org/lite/guide/build_cmake#available_options_to_build_tensorflow_lite

jishminor commented 2 years ago

The issue is that if I build with the CMake flag `TFLITE_ENABLE_XNNPACK=OFF`, XNNPACK is not included in the build at all, so application logic cannot apply the XNNPACK delegate explicitly, which is the behavior I'm looking for.

You can verify this yourself by running the CMake build for TFLite and passing `TFLITE_ENABLE_XNNPACK=OFF` as one of the CMake arguments: `libXNNPACK.a` is not built.
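
For instance, a quick check (directory name is illustrative, not from this thread):

```shell
# After a CMake build configured with -DTFLITE_ENABLE_XNNPACK=OFF,
# the XNNPACK static library is absent from the build tree:
find tflite_build -name 'libXNNPACK.a'   # no output when XNNPACK is excluded
```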

jishminor commented 1 year ago

The solution to prevent the XNNPACK delegate from being auto-applied, when TFLite is built with XNNPACK support, is to use `tflite::ops::builtin::BuiltinOpResolverWithoutDefaultDelegates` as the op resolver. This fixes the problem.
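
For anyone who lands here, a minimal sketch of that approach (the function name and placeholder model path are mine, not from the thread; the delegate setup mirrors the snippet in the original report):

```c++
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Build an interpreter with no auto-applied delegates, then apply XNNPACK
// explicitly. The model must outlive the returned interpreter.
std::unique_ptr<tflite::Interpreter> BuildInterpreterWithExplicitXnnpack(
    const tflite::FlatBufferModel& model) {
  // This resolver skips the default delegates, so XNNPACK is NOT applied
  // automatically when the interpreter is constructed.
  tflite::ops::builtin::BuiltinOpResolverWithoutDefaultDelegates resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(model, resolver)(&interpreter) != kTfLiteOk) {
    return nullptr;
  }

  // Now apply the XNNPACK delegate explicitly, with our own options,
  // exactly as in the snippet from the original report.
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = 4;
  tflite::Interpreter::TfLiteDelegatePtr xnnpack_delegate(
      TfLiteXNNPackDelegateCreate(&options),
      [](TfLiteDelegate* delegate) { TfLiteXNNPackDelegateDelete(delegate); });
  if (interpreter->ModifyGraphWithDelegate(std::move(xnnpack_delegate)) !=
      kTfLiteOk) {
    return nullptr;
  }

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    return nullptr;
  }
  return interpreter;
}

// Usage ("model.tflite" is a placeholder path):
//   auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
//   auto interpreter = BuildInterpreterWithExplicitXnnpack(*model);
```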
