kunkun007 closed this issue 3 years ago.
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes (own C++ code and model)
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: HUAWEI ARMv8 mobile
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): r1.13
- Python version: 2.7
- Bazel version (if compiling from source): 19.0.2
- GCC/Compiler version (if compiling from source): 4.9
- CUDA/cuDNN version: none
- GPU model and memory: none
Describe the current behavior: I compiled TensorFlow Lite for both arm64-v8a and armeabi-v7a, and both builds succeed. When I use them on my Android device with my own C++ code and model, the armeabi-v7a build gives correct results, but the arm64-v8a build crashes when destructing SessionOptions.
Code to reproduce the issue: I want to load my own model but it fails, so I just use this line, and the function crashes:
std::unique_ptr
Other info / logs: I tried rebuilding TensorFlow with ndk-r10e, ndk-r11c, ndk-r12b, ndk-r14b, and ndk-r15c; all of them build successfully and none of them work.
my bazel command: bazel build //tensorflow/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
I added these lines to the tensorflow/lite BUILD file:
cc_binary(
    name = "libtensorflowLite.so",
    linkopts = [
        "-shared",
        "-Wl,-soname=libtensorflowLite.so",
    ],
    linkshared = 1,
    copts = tflite_copts(),
    deps = [
        ":framework",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/delegates/flex:delegate",
    ],
)
I think the .so itself is fine because the armeabi-v7a build works successfully. Thank you very much for looking at this issue.
Can you try adding the --config=monolithic flag to your bazel build command?
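Concretely, combining the command posted above with the suggested flag would give something like the following (an untested sketch; all other flags are kept exactly as posted):

```
bazel build //tensorflow/lite:libtensorflowLite.so \
  --config=monolithic \
  --crosstool_top=//external:android/crosstool \
  --cpu=arm64-v8a \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cxxopt="-std=c++11"
```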
I use a .pb file on Android; should I change it to a .tflite file to use my own model? I will try to rebuild as you recommended.
TensorFlow Lite requires use of .tflite models, generated with the TensorFlow Lite converter. It cannot read or use frozen graphs or graph defs (or saved models).
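Incidentally, the two formats are easy to tell apart programmatically: a .tflite model is a FlatBuffer whose 4-byte file identifier "TFL3" sits at byte offset 4, while a frozen .pb is a protobuf with no such marker. A minimal stdlib-only sketch (the helper name looks_like_tflite is ours, not part of any TensorFlow API):

```python
def looks_like_tflite(data: bytes) -> bool:
    """Heuristic format check: TFLite models are FlatBuffers whose
    4-byte file identifier "TFL3" is stored at bytes 4..8."""
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Usage (hypothetical path):
#   looks_like_tflite(open("model.tflite", "rb").read())
```

This only checks the container format, of course; it says nothing about whether the converter produced a valid model.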
Hi @kunkun007, I am new to TensorFlow and I am building it for arm64. I assume you cross-compiled the TensorFlow source code. Can you please explain how to build TensorFlow for arm64?
Hi @jdduke, I built TensorFlow Lite as you recommended, but it still does not work, so I changed my model to .tflite, and now it loads successfully. Unfortunately, I use MTCNN, and its PNet must have a fixed input size, otherwise the TF Lite converter fails; I don't know how to run MTCNN inference in that case. Should I open a new issue for this problem? Thank you for your reply.
Hi @thotaram:
1. Clone the TensorFlow source code with the git clone command.
2. Run ./configure to set the Android SDK and NDK.
3. Modify the tensorflow/lite BUILD file:
cc_binary(
    name = "libtensorflowLite.so",
    linkopts = [
        "-shared",
        "-Wl,-soname=libtensorflowLite.so",
    ],
    linkshared = 1,
    copts = tflite_copts(),
    deps = [
        ":framework",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/delegates/flex:delegate",
    ],
)
Note that on older TensorFlow versions the Lite sources live under tensorflow/contrib/lite instead of tensorflow/lite, and there is no "//tensorflow/lite/delegates/flex:delegate" target, so that deps line must be dropped.
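For those older releases, the modified target would look roughly like this (a sketch only; the exact package paths depend on the release):

```
cc_binary(
    name = "libtensorflowLite.so",
    linkopts = [
        "-shared",
        "-Wl,-soname=libtensorflowLite.so",
    ],
    linkshared = 1,
    copts = tflite_copts(),
    deps = [
        ":framework",
        "//tensorflow/contrib/lite/kernels:builtin_ops",
    ],
)
```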
4. Run the bazel build command. Mine was: bazel build //tensorflow/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
You should install Bazel first, and make sure it is a compatible version.
Hi @kunkun007, thank you for your inputs. I used the link below to cross-build TensorFlow: https://github.com/xifengcun/tensorflow-aarch64-crossbuild. Configuration completes successfully, but running the bazel command produces the errors below.
ERROR: /home/aiiec/ARMNN/tensorflow_v1.13.1/tensorflow/tools/aarch64_compiler/BUILD:15:1: //tools/aarch64_compiler:gcc-linux-aarch64: no such attribute 'dynamic_runtime_libs' in 'cc_toolchain' rule
ERROR: /home/aiiec/ARMNN/tensorflow_v1.13.1/tensorflow/tools/aarch64_compiler/BUILD:15:1: //tools/aarch64_compiler:gcc-linux-aarch64: no such attribute 'static_runtime_libs' in 'cc_toolchain' rule
I would like to know whether you followed any web guide to build TensorFlow Lite successfully. Thanks in advance.
Hi There,
We are checking to see if you still need help on this issue, as you are using an older version of TensorFlow (1.x), which is officially considered end of life. We recommend that you upgrade to 2.4 or a later version and let us know if the issue still persists in newer versions.
This issue will be closed automatically 7 days from now. If you still need help with this issue, please open a new issue against 2.x, and we will get you the right help.