Closed martis-chromium closed 5 years ago
Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
- Have I written custom code
- OS Platform and Distribution
- TensorFlow installed from
- TensorFlow version
- Bazel version
- CUDA/cuDNN version
- GPU model and memory
- Exact command to reproduce
These fields are not applicable, as this is a feature request. I might have used the wrong template.
Making the shared library target is easy, but we probably want to be careful about which symbols should be exposed.
Hi @martis-chromium, we're actively working to publish a fully functional, pure C API, and as part of this will support a proper shared library target and various language bindings for that API. We don't have an exact ETA, but expect some related activity in the coming weeks.
Ok, good to know - thank you.
In the meantime, I'd like to (locally) produce a TF lite shared library and limit its size somewhat.
Is using a linker version script to only expose symbols containing the substring "tflite" an acceptable (broad-stroke) solution?
I am able to compile and run the demos using this approach, but I wanted to double-check that I'm not suppressing some symbols I'll need later.
You can certainly try that, though you may get mixed results. One example of a stripped binary for TFLite is the JNI library @ https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/BUILD#L172. That build rule is even more aggressive about stripping.
For a preview of what the shared library will look like, see the new :libtensorflowlite_c.so target. We'll be adding an analogous target with C++ bindings in the near future.
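As an aside, the version-script approach asked about above is easy to sanity-check on a toy library before applying it to TFLite. Everything below (file names, symbol names, the `*tflite*` glob) is illustrative, not TFLite's actual build flags:

```shell
# Toy demonstration of hiding symbols with a GNU ld version script.
# All names here are illustrative.
mkdir -p /tmp/vscript_demo && cd /tmp/vscript_demo

cat > toy.c <<'EOF'
int tflite_add(int a, int b) { return a + b; }  /* matches the glob: exported */
int helper(void) { return 42; }                 /* no match: hidden */
EOF

# Export only symbols containing "tflite"; make everything else local.
cat > exported.lds <<'EOF'
{
  global: *tflite*;
  local:  *;
};
EOF

cc -shared -fPIC -Wl,--version-script=exported.lds -o libtoy.so toy.c

# The dynamic symbol table should now contain tflite_add but not helper.
nm -D --defined-only libtoy.so
```

The same `-Wl,--version-script=...` flag can be passed through Bazel's linkopts; whether the `tflite` glob catches every symbol the runtime needs is exactly the open question in this thread.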
@martis-chromium how do you build the demo? I am having exactly the same problem as #18060
Are you waiting on a response from me? I'm not really in a position to help @tensorbuffer with their problem.
Despite statements from @shashishekhar like "Making the shared library target is easy", no one has actually explained how to build a shared library. Googling around returns conflicting results. Could someone please clarify this?
@Nimitz14 here's how I did it after reading a couple of issues around here
diff --git a/tensorflow/lite/BUILD b/tensorflow/lite/BUILD
index be84fc5db1..fadf4069e7 100644
--- a/tensorflow/lite/BUILD
+++ b/tensorflow/lite/BUILD
@@ -342,6 +342,21 @@ cc_test(
],
)
+cc_binary(
+ name = "libtflite.so",
+ deps = [
+ "//tensorflow/lite/kernels:builtin_ops",
+ "//tensorflow/lite:builtin_op_data",
+ "//tensorflow/lite:framework",
+ "//tensorflow/lite:schema_fbs_version",
+ "//tensorflow/lite:string",
+ "//tensorflow/lite:string_util",
+ "//tensorflow/lite/schema:schema_fbs",
+
+ ],
+ linkshared=1
+)
+
# Test the serialization of a model with optional tensors.
# Model tests
You can probably get away with fewer entries in deps.
This issue, along with #22643, makes development on Android a big headache.
Here's mine:
diff --git a/tensorflow/lite/BUILD b/tensorflow/lite/BUILD
index f8bb719..7fe6227 100644
--- a/tensorflow/lite/BUILD
+++ b/tensorflow/lite/BUILD
@@ -340,6 +340,20 @@ cc_test(
],
)
+cc_binary(
+ name = "libtensorflowlite.so",
+ linkopts=[
+ "-shared",
+ "-Wl,-soname=libtensorflowlite.so",
+ ],
+ linkshared = 1,
+ copts = tflite_copts(),
+ deps = [
+ ":framework",
+ "//tensorflow/lite/kernels:builtin_ops",
+ ],
+)
+
Hi TF developers, I also think it would be super convenient if there were a Bazel target for a C library. Looking forward to that! @Nimitz14, for now there is a Makefile under tensorflow/lite/tools/make. If you run it from the tensorflow root path, like:
make -f tensorflow/lite/tools/make/Makefile TARGET=android TARGET_ARCH=armv7 TARGET_TOOLCHAIN_PREFIX=/ndk_standalone_tc/p19/arm/bin/arm-linux-androideabi- all -j6
it should generate a static library named libtensorflow-lite.a under tensorflow/lite/tools/make/gen/android_armv7/lib/. Thanks again :+1:
@martis-chromium @Nimitz14 I have built the .so and used it in an Android project. But the .so is 3.7 MB, which is big compared with the hundreds of KB reported above. Can you help me?
Hey all, a new shared library target that has both C and C++ bindings is now available.
You can build this (for Android) as follows:
bazel build //tensorflow/lite:libtensorflowlite.so \
--config=android_arm --cxxopt='--std=c++11' -c opt
This should have a smaller binary size, as it strips most unnecessary symbols. Sadly, bazel doesn't have very good support for generating monolithic static (.a) libraries unless you use a custom build rule that manually assembles the objects via ar.
bazel build //tensorflow/lite:libtensorflow.so \
  --config=android_arm --cxxopt='--std=c++11' -c opt
bazel build //tensorflow/lite:libtensorflowlite.so \
  --config=android_arm --cxxopt='--std=c++11' -c opt
Ah, good catch, updated!
bazel build //tensorflow/lite:libtensorflowlite.so --config=android_arm --cxxopt='--std=c++11' -c opt
What does it take to make this work for the x86/x86_64 configurations? --config=android_x86 is rejected as invalid by bazel.
You can use: bazel build //tensorflow/lite:libtensorflowlite.so --config=android --cpu=x86 --cxxopt='--std=c++11' -c opt
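If you need more than one ABI, the same invocation can be looped over the standard Android cpu names. This is only a sketch: it assumes ./configure has been pointed at an Android SDK/NDK, and the exact cpu names accepted may vary with your bazel version:

```shell
# Sketch: build libtensorflowlite.so for the common Android ABIs and
# collect the results under out/<abi>/. Assumes the NDK is configured.
for cpu in armeabi-v7a arm64-v8a x86 x86_64; do
  bazel build //tensorflow/lite:libtensorflowlite.so \
    --config=android --cpu="$cpu" --cxxopt='--std=c++11' -c opt
  mkdir -p "out/$cpu"
  cp -f bazel-bin/tensorflow/lite/libtensorflowlite.so "out/$cpu/"
done
```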
The size of these shared libraries is ~50 MB, contrary to what @jdduke said here about it being 1-1.5 MB. Is there something extra one has to do for that to happen?
@RuABraun can you provide the exact build command you used to generate the library? Using
bazel build //tensorflow/lite:libtensorflowlite.so --config=android_arm64 --cxxopt='--std=c++11' -c opt
I'm getting < 2 MB.
I'm using the same. Afterwards:
~/git/tensorflow-android$ ll -h bazel-out/arm64-v8a-opt/bin/tensorflow/lite/libtensorflowLite.so
-r-xr-xr-x 1 rab rab 53M May 28 14:51 bazel-out/arm64-v8a-opt/bin/tensorflow/lite/libtensorflowLite.so*
During configure I said no to everything except the Android stuff; I'm not using clang. This is on the latest commit of master (last commit from 5 hours ago). Struggling to see where things could have diverged (unless I should be using clang?).
EDIT: nevermind! much smaller now. dunno what happened. Thank you for making me rerun it :D
EDIT2: The only difference I can spot now is that I was specifying the cpu like --cpu=armeabi-v8a. Oh, and I was targeting lite with a capital L.
If it helps, here is a shell script that will compile the android distribution, and then make static libraries from the .a files in the build.
#!/bin/bash
# Script to make a tensorflowlite static library from the generated archives.
# (bash, not sh: the configs array below is not POSIX.)
# Run it from your local tensorflow directory.
configs=(android_arm android_arm64)
for config in "${configs[@]}"
do
  bazel build --cxxopt='--std=c++11' -c opt --config=$config //tensorflow/lite/java:tensorflow-lite
done
# See: How to build tensorflowlite as a shared library
# https://github.com/tensorflow/tensorflow/issues/20905#issuecomment-468756051
# See: How to combine libraries using ELF AR
# https://stackoverflow.com/a/23621751/45114
# Step 1. Edit this to point to an 'ar' command in your Android toolchain that can
# handle the output archive files. I'm running on a mac so I use darwin.
AR="${ARG_NDK_PATH}/toolchains/x86_64-4.9/prebuilt/darwin-x86_64/x86_64-linux-android/bin/ar"
# This function outputs an "MRI" script as commands to the 'ar'
# command that builds the resulting library.
write_mri() {
  # See: How to combine libraries using ELF AR
  # https://stackoverflow.com/a/23621751/45114
  echo "create ${result}"
  for line in $(find "$dir" -type f -name \*.a)
  do
    echo "addlib ${line}"
  done
  echo "save"
  echo "end"
}
for dir in bazel-out/android-*
do
  mkdir -p "${dir}/lib"
  result="${dir}/lib/libtensorflowlite.a"
  rm -f "$result"
  write_mri | "$AR" -M
  echo "made: ${result}"
done
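The MRI mechanism the script relies on can be tried in isolation, without a TensorFlow checkout. Here is a toy run (all file names illustrative; ar happily archives any file type) that combines two archives the same way write_mri | ar -M does:

```shell
# Toy demonstration of combining archives with an ar MRI script.
mkdir -p /tmp/mri_demo && cd /tmp/mri_demo
echo one > a.txt
echo two > b.txt
ar rc liba.a a.txt
ar rc libb.a b.txt

# Feed ar an MRI script on stdin, exactly as the write_mri function does.
ar -M <<'EOF'
create libcombined.a
addlib liba.a
addlib libb.a
save
end
EOF

# The combined archive now contains the members of both inputs.
ar t libcombined.a
```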
@michaeljbishop this is very helpful, however when I try to link against the generated libraries I get the following errors:
/Users/xxx/Library/Android/sdk/ndk-bundle/toolchains/llvm/prebuilt/darwin-x86_64/lib/gcc/arm-linux-androideabi/4.9.x/../../../../arm-linux-androideabi/bin/ld: error: /Users/xxx/Github/tensorflow/tensorflow/lite/tools/make/gen/ANDROID_arm64/lib/libtensorflow-lite.a: no archive symbol table (run ranlib)
Any ideas?
@jdduke I am on Ubuntu 16.04.6 LTS
I launched docker with: 'sudo docker run -it --rm tensorflow/tensorflow:devel bash'
Then './configure' in tensorflow_src (with defaults)
And then used the command:
bazel build //tensorflow/lite:libtensorflowlite.so \
  --config=android_arm --cxxopt='--std=c++11' -c opt
But I get the error: ERROR: /tensorflow_src/tensorflow/lite/kernels/internal/BUILD:615:1: C++ compilation of rule '//tensorflow/lite/kernels/internal:audio_utils' failed (Exit 1) Target //tensorflow/lite:libtensorflowlite.so failed to build
Furthermore when I rerun it I get: ERROR: /tensorflow_src/tensorflow/lite/experimental/resource_variable/BUILD:6:1: C++ compilation of rule '//tensorflow/lite/experimental/resource_variable:resource_variable' failed (Exit 1) Target //tensorflow/lite:libtensorflowlite.so failed to build
And I get different failures almost arbitrarily.
@amitDaMan when you say you used the defaults, did you at least point it to your Android SDK/NDK install location? That is a required part of the configure script if you're building with --config=android_arm.
@jdduke That solved it, thank you for your speedy assistance.
Hi @jdduke, I followed your instructions and successfully built //tensorflow/lite:libtensorflowlite.so with size < 2 MB. However, when I link it with my executable and run on Android, my tflite model loads fine but I get a segmentation fault when executing builder(&interpreter). I am using tensorflow 1.15.0 and flatbuffer 1.10. Any ideas?
@zhaojiaTech were you using the same compiler and STL version for both the libtensorflowlite.so library and your own C++ code? If not, that might cause issues, and you may be better off using the C API which avoids these kinds of ABI incompatibilities.
I should note, we're hoping to introduce a helper build script which will also package all the necessary C++/C headers to make the shared libraries easier to use.
@jdduke hey. I am trying to run the c++ lib on iOS. Not using the C API/Swift/ObjC. Do you have any tips on how to do this? Building the c++ lib and linking it seems straightforward. But I am wondering how to setup the c++ lib to use the metal delegate?
@scm-ns this thread is quite long already, can you file a separate bug if you're having trouble with iOS + C++ + Metal?
Hi, what if I want to build a lib for ARMv6 Linux?
Hi @jdduke, I followed your instructions and successfully built //tensorflow/lite:libtensorflowlite.so with size < 2 MB. However, when I run on Android with a 64-bit architecture, my tflite model allegedly terminates successfully but with 0 [ms] runtime and no output. When I use a different platform with a 32-bit ARM-based architecture, everything works fine. I am using tensorflow 1.14.0 and flatbuffer 2.0. Any idea what the problem could be?
How did you build the 64-bit library? With --config=android_arm64?
@michaeljbishop Thanks for the script building a static build of tflite. However, I keep getting undefined reference errors; it seems that the static build does not include all the needed implementations?
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `std::__ndk1::default_delete<tflite::Interpreter>::operator()(tflite::Interpreter*) const':
/home/andrey/Android/Sdk/ndk/android-ndk-r17c/sources/cxx-stl/llvm-libc++/include/memory:2233: undefined reference to `tflite::Interpreter::~Interpreter()'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `std::__ndk1::default_delete<tflite::FlatBufferModel>::operator()(tflite::FlatBufferModel*) const':
/home/andrey/Android/Sdk/ndk/android-ndk-r17c/sources/cxx-stl/llvm-libc++/include/memory:2233: undefined reference to `tflite::FlatBufferModel::~FlatBufferModel()'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `BokehAPI::loadModel(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const&)':
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:22: undefined reference to `tflite::DefaultErrorReporter()'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:22: undefined reference to `tflite::FlatBufferModel::BuildFromFile(char const*, tflite::ErrorReporter*)'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `std::__ndk1::default_delete<tflite::FlatBufferModel>::operator()(tflite::FlatBufferModel*) const':
/home/andrey/Android/Sdk/ndk/android-ndk-r17c/sources/cxx-stl/llvm-libc++/include/memory:2233: undefined reference to `tflite::FlatBufferModel::~FlatBufferModel()'
/home/andrey/Android/Sdk/ndk/android-ndk-r17c/sources/cxx-stl/llvm-libc++/include/memory:2233: undefined reference to `tflite::FlatBufferModel::~FlatBufferModel()'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `BokehAPI::loadModel(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const&)':
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:31: undefined reference to `tflite::ops::builtin::BuiltinOpResolver::BuiltinOpResolver()'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:32: undefined reference to `tflite::InterpreterBuilder::InterpreterBuilder(tflite::FlatBufferModel const&, tflite::OpResolver const&)'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:32: undefined reference to `tflite::InterpreterBuilder::operator()(std::__ndk1::unique_ptr<tflite::Interpreter, std::__ndk1::default_delete<tflite::Interpreter> >*)'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:32: undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:32: undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `BokehAPI::runBokeh(unsigned char const*, int*)':
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:36: undefined reference to `tflite::Interpreter::AllocateTensors()'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/src/main/cpp/BokehAPI.cpp:61: undefined reference to `tflite::Interpreter::Invoke()'
CMakeFiles/bokehapi.dir/src/main/cpp/BokehAPI.cpp.o: In function `~MutableOpResolver':
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/.cxx/cmake/debug/arm64-v8a/../../../../src/main/cpp/include/tensorflow/lite/mutable_op_resolver.h:57: undefined reference to `vtable for tflite::MutableOpResolver'
/home/andrey/data/bokeh/code/bokeh_android_tmp/app/.cxx/cmake/debug/arm64-v8a/../../../../src/main/cpp/include/tensorflow/lite/mutable_op_resolver.h:57: undefined reference to `vtable for tflite::MutableOpResolver'
On the other hand, I was able to get the shared library of TFLite working. However, we still need to be able to build the static version of TFLite.
@andreydung can you talk more about your static lib needs? Is there a reason the shared library is insufficient (assuming this is for Android)?
@jdduke I would assume that the need for a static library here is the same basic reason for needing any static library: speed and size optimization. I only want what I absolutely need and I don't want the overhead of dynamic lookups.
The overhead of calls to the shared library should be minimal relative to the cost of running inference (i.e., any marginal overhead in non-inlined calls will be dominated by the cost of executing the model), so I don't think you'd see any observable performance impact between static or shared library usage of TensorFlow Lite (unless you were to use different compiler optimization flags in your library vs the shared library).
I appreciate that the cost is negligible compared to the cost of inference. My point is that it's an unnecessary cost regardless.
Another use case is creating one's own shared library which embeds TensorFlow inside.
Does anyone know how to specify this in CMake, to reduce the libtensorflow-lite.a size?
Describe the problem
Would the TF project be open to supporting a TF lite shared library target (as is done already with libtensorflow.so and libtensorflow_cc.so)? I believe many people would benefit from this, based on recent related issues:
https://github.com/tensorflow/tensorflow/issues/18060
https://github.com/tensorflow/tensorflow/issues/17826
https://github.com/tensorflow/tensorflow/issues/16219
I am happy to do the upfront work, but I would require assistance from TF devs for e.g. correct build configuration and on-going support.
Source code / logs
We could add a tf_cc_shared_object target to tensorflow/contrib/lite/BUILD. We could also add a header-only target to define the headers for use with libtensorflow_lite.so (although this won't be useful until issue 5192 has been resolved). As mentioned above, I'd need some input to decide the correct build config (e.g. linkopts, use of framework_so).
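For concreteness, a hypothetical sketch of the kind of target meant here. tf_cc_shared_object is the macro already used for libtensorflow.so; the deps shown are a guess, and the linkopts/symbol-visibility settings are exactly the open question:

```
load("//tensorflow:tensorflow.bzl", "tf_cc_shared_object")

tf_cc_shared_object(
    name = "libtensorflow_lite.so",
    # The right linkopts / exported-symbol settings are the open question.
    deps = [
        "//tensorflow/contrib/lite:framework",
        "//tensorflow/contrib/lite/kernels:builtin_ops",
    ],
)
```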