Babwenbiber opened this issue 4 years ago
Are the env vars LD_LIBRARY_PATH
and LIBRARY_PATH
correct? They have to contain the TensorFlow libraries (the *.so files).
I tried this, but I get the exact same error message when running make. I did
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TENSORFLOW_BUILD_DIR
and
export LIBRARY_PATH=$LIBRARY_PATH:$TENSORFLOW_BUILD_DIR
Minor: It should be export LD_LIBRARY_PATH=$TENSORFLOW_BUILD_DIR:$LD_LIBRARY_PATH
and likewise for LIBRARY_PATH,
to make sure the TensorFlow libraries are the first ones the linker considers.
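Concretely, the prepended form looks like this (TENSORFLOW_BUILD_DIR being the build directory from the repro steps):

```shell
# Prepend the TensorFlow build dir so its *.so files shadow any
# other TensorFlow libraries already on the search paths.
export LD_LIBRARY_PATH="$TENSORFLOW_BUILD_DIR:$LD_LIBRARY_PATH"
export LIBRARY_PATH="$TENSORFLOW_BUILD_DIR:$LIBRARY_PATH"
```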
I cannot reproduce this issue here.
I updated my post and put detailed information about my bazel configuration. Maybe this helps.
I am a bit confused. You built TensorFlow from source (v1.13) but installed the tensorflow pip package (v1.9). If that is correct, the following issue is likely: since you installed tensorflow-gpu
v1.9, Python will only see that version, but you need to compile against v1.13 for inference. Having compiled TensorFlow from source, you should also build and install the pip wheel, so that the TensorFlow version in Python is consistent with the *.so files.
This would at least explain the linker errors you observed.
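For TF 1.x the wheel is typically built like this (the target and output paths below are the standard ones for an r1.13 source tree; adjust them to your setup):

```shell
# Build the pip package target, run from the TensorFlow source tree
# after the same ./configure used for building the *.so files.
bazel build //tensorflow/tools/pip_package:build_pip_package

# Assemble the wheel into /tmp/tensorflow_pkg and install it,
# replacing the mismatched v1.9 package.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install --upgrade /tmp/tensorflow_pkg/tensorflow-*.whl
```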
Environment
python -c "import tensorflow as tf; sess=tf.InteractiveSession()" works.
Issue
I can't build the C++ inference example. The make command fails with a linker error.
Context: I cloned this repo and followed the instructions.
Reproduce:
1) git clone https://github.com/tensorflow/tensorflow/ && cd tensorflow
2) git checkout r1.13
3) ./configure (python 2.7, CUDA 9.0) with the following options: XLA JIT support: Y, OpenCL SYCL support: N, ROCm support: N, CUDA support: Y (version 9), cuDNN version: 7, TensorRT support: N, NCCL version: https://github.com/nvidia/nccl, CUDA compute capabilities: 6.1,6.1, clang as CUDA compiler: N, MPI support: N, bazel optimization flags: -march=native -Wno-sign-compare, workspace for Android: N
4) export TENSORFLOW_SOURCE_DIR and TENSORFLOW_BUILD_DIR
5) mkdir ${TENSORFLOW_BUILD_DIR}
6) cp ${TENSORFLOW_SOURCE_DIR}/bazel-bin/tensorflow/*.so ${TENSORFLOW_BUILD_DIR}/
7) cp ${TENSORFLOW_SOURCE_DIR}/bazel-genfiles/tensorflow/cc/ops/*.h ${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/
This command comes from the instructions in this repo, but it fails because the subdirectories do not exist yet, so I ran mkdir -p ${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/ first.
8) cd inference/cc
9) mkdir build
10) cmake .. (tried cmake .. -DPYTHON_EXECUTABLE=python as well)
11) make
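Put together, steps 5 to 7 with the mkdir -p fix look like this:

```shell
# Assumes TENSORFLOW_SOURCE_DIR and TENSORFLOW_BUILD_DIR are set (step 4).
# Create the build dir and the header subdirectories the repo's
# instructions assume already exist.
mkdir -p "${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops"

# Copy the shared libraries and the generated op headers.
cp "${TENSORFLOW_SOURCE_DIR}"/bazel-bin/tensorflow/*.so "${TENSORFLOW_BUILD_DIR}/"
cp "${TENSORFLOW_SOURCE_DIR}"/bazel-genfiles/tensorflow/cc/ops/*.h "${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/"
```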
Output
Expectation
A successful make build.
Investigation
I tried several linker flags in the CMake file (-ltensorflow, -ltensorflow_cc, -ltensorflow_framework), as suggested in https://github.com/tensorflow/tensorflow/issues/14632. I tried the steps described above on two different machines (both Ubuntu 16.04, one with CUDA as shown above, the other without), but both failed with the error shown above. I also tried the Custom Operation guide in the tensorflow-cmake repo, but that didn't succeed either.