lunasdejavu opened 5 years ago
I do not use Windows, which is why it states
Please report if this works
Detecting TensorFlow info is done via a Python call. Please change the line
https://github.com/PatWie/tensorflow-cmake/blob/675970503a1cffbc033d900c4d528d2d6ae73ed6/cmake/modules/FindTensorFlow.cmake#L65
replacing PYTHON_EXECUTABLE with the path to your Python binary, and please report back whether that works or not.
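For illustration, the edit could be as small as hard-coding the interpreter at that spot; the path below is only a placeholder for your own Python installation that has TensorFlow installed:

# in cmake/modules/FindTensorFlow.cmake, instead of relying on the detected PYTHON_EXECUTABLE:
set(PYTHON_EXECUTABLE "C:/Path/To/Python37/python.exe")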
I also wanted to try it on Windows 10 with Visual Studio 2017, but I'm facing a similar issue.
First I built TensorFlow from source to get the C and C++ APIs, using the following steps:
Install Python 3.7.0 (https://www.python.org/downloads/windows/)
Install the TensorFlow pip package dependencies
pip3 install six numpy wheel
pip3 install keras_applications==1.0.6 --no-deps
pip3 install keras_preprocessing==1.0.5 --no-deps
Install the MSYS2 shell (http://www.msys2.org/)
pacman -Syu
pacman -Su
Install Bazel 0.24.1 (https://docs.bazel.build/versions/master/install-windows.html)
Open MSYS2 and run
pacman -S git patch unzip
Download the TensorFlow source code
git clone https://github.com/tensorflow/tensorflow.git
Configure the TensorFlow build (I left the default settings)
cd tensorflow
python ./configure.py
Build the TensorFlow libraries
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel build -c opt //tensorflow:libtensorflow.so
bazel build -c opt //tensorflow:libtensorflow_cc.so
bazel-bin/tensorflow/tools/pip_package/build_pip_package C:/tmp/tensorflow_pkg
Install the TensorFlow package
pip3 install /c/tmp/tensorflow_pkg/tensorflow-1.13.1-cp37-cp37m-win_amd64.whl
At this point I also checked that the TensorFlow Python API worked by running a small Python script that creates a tensor.
Then I wanted to try the tensorflow-cmake/inference example, so I first exported the model by running
git clone https://github.com/PatWie/tensorflow-cmake
cd tensorflow-cmake/inference
python python/inference.py
However, when trying to configure the project with CMake, I get this error:
cmake .
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.14.26433.0
-- The CXX compiler identification is MSVC 19.14.26433.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Users/migue_8d3inso/Development/tensorflow-cmake/cmake/modules/FindTensorFlow.cmake:50 (message):
This FindTensorflow.cmake is not tested on WIN32
Please report if this works
https://github.com/PatWie/tensorflow-cmake
Call Stack (most recent call first):
CMakeLists.txt:6 (find_package)
-- Detecting TensorFlow using python3 (use -DPYTHON_EXECUTABLE=... otherwise)
CMake Error at C:/Users/migue_8d3inso/Development/tensorflow-cmake/cmake/modules/FindTensorFlow.cmake:71 (message):
Detecting TensorFlow info - failed
Did you installed TensorFlow?
Call Stack (most recent call first):
CMakeLists.txt:6 (find_package)
-- Configuring incomplete, errors occurred!
See also "C:/Users/migue_8d3inso/Development/tensorflow-cmake/inference/cc/CMakeFiles/CMakeOutput.log".
Do you have some ideas on how to adjust FindTensorflow.cmake to make it work on Windows?
Thanks in advance!
Have you tried
cmake . -DPYTHON_EXECUTABLE=Path/to/python.exe
Python is required to find the TF library, as this is easier than typing the paths by hand. I probably need a fallback option that allows users to specify the parts manually.
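One possible shape for such a fallback (just a sketch of the idea, not something the module implements today; the variable names mirror the values it prints):

# Let users bypass the Python probe entirely by pre-seeding these cache entries,
# e.g. -DTensorFlow_INCLUDE_DIR=... -DTensorFlow_LIBRARY=... on the command line.
set(TensorFlow_VERSION "" CACHE STRING "TensorFlow version, e.g. 1.13.1")
set(TensorFlow_ABI "" CACHE STRING "value of tf.__cxx11_abi_flag__")
set(TensorFlow_INCLUDE_DIR "" CACHE PATH "directory reported by tf.sysconfig.get_include()")
set(TensorFlow_LIBRARY "" CACHE FILEPATH "full path to the TensorFlow runtime library")
if(TensorFlow_INCLUDE_DIR AND TensorFlow_LIBRARY)
  message(STATUS "Using manually specified TensorFlow paths")
else()
  # fall back to the Python-based detection
endif()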
CMake really just tries to run
python -c "import tensorflow as tf; print(tf.__version__); print(tf.__cxx11_abi_flag__); print(tf.sysconfig.get_include()); print(tf.sysconfig.get_lib() + '/libtensorflow_framework.so')"
and -DPYTHON_EXECUTABLE should point to the correct Python binary.
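Under the hood this is just an execute_process call; roughly like the following sketch (illustrative, not the verbatim module code, and the variable names are only examples):

execute_process(
  COMMAND ${PYTHON_EXECUTABLE} -c
          "import tensorflow as tf; print(tf.__version__); print(tf.__cxx11_abi_flag__); print(tf.sysconfig.get_include()); print(tf.sysconfig.get_lib() + '/libtensorflow_framework.so')"
  OUTPUT_VARIABLE TF_INFORMATION_STRING
  OUTPUT_STRIP_TRAILING_WHITESPACE
  RESULT_VARIABLE retcode)
if(NOT "${retcode}" STREQUAL "0")
  message(FATAL_ERROR "Detecting TensorFlow info - failed: is TensorFlow importable from this interpreter?")
endif()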
@PatWie I tried it on Windows 10 in a conda virtual environment with
cmake -A x64 . -DPYTHON_EXECUTABLE=D:\app\anaconda\envs\tf110\python
and got
-- Building for: Visual Studio 14 2015
-- Selecting Windows SDK version to target Windows 10.0.17134.
-- The C compiler identification is MSVC 19.0.24215.1
-- The CXX compiler identification is MSVC 19.0.24215.1
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at D:/sin/tensorflow-cmake-master/cmake/modules/FindTensorFlow.cmake:50 (message):
This FindTensorflow.cmake is not tested on WIN32
Please report if this works
https://github.com/PatWie/tensorflow-cmake
Call Stack (most recent call first):
CMakeLists.txt:11 (find_package)
-- Detecting TensorFlow using D:\app\anaconda\envs\tf110\python (use -DPYTHON_EXECUTABLE=... otherwise)
-- Detecting TensorFlow info - done
-- Found TensorFlow: (found appropriate version "1.10.0")
-- TensorFlow-ABI is 0
-- TensorFlow-INCLUDE_DIR is D:\app\anaconda\envs\tf110\lib\site-packages\tensorflow\include
-- TensorFlow-LIBRARY is D:\app\anaconda\envs\tf110\lib\site-packages\tensorflow/libtensorflow_framework.so
-- No TensorFlow-CC-LIBRARY detected
-- TensorFlow-SOURCE-DIRECTORY is C:/tensorflow
-- Found TENSORFLOW: D:\app\anaconda\envs\tf110\lib\site-packages\tensorflow/libtensorflow_framework.so (found version "1.10.0")
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0 (found suitable version "9.0", minimum required is "9")
-- will build custom TensorFlow operation "matrix_add" (CPU+GPU)
-- Configuring done
-- Generating done
-- Build files have been written to: D:/sin/tensorflow-cmake-master/custom_op/user_ops
Then I built it with
MSBuild /p:Configuration=Release matrix_add_op.vcxproj
but got this error message:
(CustomBuild object) ->
D:/app/anaconda/envs/tf110/lib/site-packages/tensorflow/include\tensorflow/core/util/cuda_device_functions.h(32): fatal error C1083: Cannot open include file: 'cuda/include/cuda.h': No such file or directory [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
Do you have any idea how to solve this problem? Thanks in advance!
CUDA must be installed on your machine. I have to admit I have never done CUDA stuff on Windows. Did you follow something like https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html to install CUDA? Have you compiled a CUDA program under Windows before?
@PatWie Thanks a lot for your reply!
I actually installed CUDA 9.0; furthermore, as the message above shows, CMake found the CUDA path
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0 (found suitable version "9.0", minimum required is "9")
I compiled tensorflow-gpu 1.8 on this machine with cmake long ago.
Please refer to the TensorFlow issue https://github.com/tensorflow/tensorflow/issues/15002#issuecomment-424232917; I already mentioned the exact same issue there. One workaround is to add the line
include_directories(SYSTEM "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0")
For that to work, you should check that the file
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0/cuda/include/cuda.h
exists. However, the CUDA installation only contains the file include/cuda.h, not the path cuda/include/cuda.h. So you need to either make a symbolic link such that
my/symbolic/link/cuda/include/cuda.h
exists and write
include_directories(SYSTEM "my/symbolic/link")
or you just copy the missing files.
Please report back so that I can try to hack a workaround for Windows.
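If symbolic links are inconvenient, the copy variant can also be scripted at configure time. A minimal sketch, assuming CUDA 9.0 in its default location (adjust the path to your installation):

# Mirror the CUDA include directory into the build tree so that the relative path
# "cuda/include/cuda.h" used by the TensorFlow headers becomes resolvable.
set(CUDA_ROOT "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0")
file(COPY "${CUDA_ROOT}/include" DESTINATION "${CMAKE_BINARY_DIR}/cuda_shim/cuda")
include_directories(SYSTEM "${CMAKE_BINARY_DIR}/cuda_shim")
# afterwards ${CMAKE_BINARY_DIR}/cuda_shim/cuda/include/cuda.h exists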
@PatWie I also tried using
cd cc
export TENSORFLOW_BUILD_DIR=~/Development/tensorflow-build
export TENSORFLOW_SOURCE_DIR=~/Development/tensorflow
cmake -G "Visual Studio 15 2017 Win64" . -DPYTHON_EXECUTABLE=/c/Users/migue_8d3inso/AppData/Local/Programs/Python/Python37/python
However CMake fails with the following output
$ cmake -G "Visual Studio 15 2017 Win64" . -DPYTHON_EXECUTABLE=/c/Users/migue_8d3inso/AppData/Local/Programs/Python/Python37/python
-- The C compiler identification is MSVC 19.14.26433.0
-- The CXX compiler identification is MSVC 19.14.26433.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x64/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x64/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x64/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx86/x64/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Users/migue_8d3inso/Development/tensorflow-cmake/cmake/modules/FindTensorFlow.cmake:50 (message):
This FindTensorflow.cmake is not tested on WIN32
Please report if this works
https://github.com/PatWie/tensorflow-cmake
Call Stack (most recent call first):
CMakeLists.txt:6 (find_package)
-- Detecting TensorFlow using C:/Users/migue_8d3inso/AppData/Local/Programs/Python/Python37/python (use -DPYTHON_EXECUTABLE=... otherwise)
-- Detecting TensorFlow info - done
-- Found TensorFlow: (found appropriate version "1.13.1")
-- TensorFlow-ABI is 0
-- TensorFlow-INCLUDE_DIR is C:\Users\migue_8d3inso\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\include
-- TensorFlow-LIBRARY is C:\Users\migue_8d3inso\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow/libtensorflow_framework.so
-- No TensorFlow-CC-LIBRARY detected
-- TensorFlow-SOURCE-DIRECTORY is C:/Users/migue_8d3inso/Development/tensorflow
-- Found TENSORFLOW: C:\Users\migue_8d3inso\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow/libtensorflow_framework.so (found version "1.13.1")
CMake Error at C:/Users/migue_8d3inso/Development/tensorflow-cmake/cmake/modules/FindTensorFlow.cmake:222 (message):
Project requires libtensorflow_cc.so, please specify the path in ENV-VAR 'TENSORFLOW_BUILD_DIR'
Call Stack (most recent call first):
CMakeLists.txt:13 (TensorFlow_REQUIRE_C_LIBRARY)
-- Configuring incomplete, errors occurred!
See also "C:/Users/migue_8d3inso/Development/tensorflow-cmake/inference/cc/CMakeFiles/CMakeOutput.log".
Note that it says it cannot find libtensorflow_cc.so within the TENSORFLOW_BUILD_DIR; however, I checked that the libraries are there.
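One possible reason is that the find module looks for a file literally named libtensorflow_cc.so, while a Bazel build on Windows may name or place the artifact differently. A hedged sketch of a more tolerant lookup (the candidate names and sub-directories below are guesses, not what the module currently does):

# Accept the unusual extensions Bazel may produce even on Windows.
list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES ".so" ".dll" ".dll.a" ".lib")
find_library(TensorFlow_CC_LIBRARY
  NAMES tensorflow_cc libtensorflow_cc
  HINTS "$ENV{TENSORFLOW_BUILD_DIR}"
  PATH_SUFFIXES . tensorflow bazel-bin/tensorflow)
if(NOT TensorFlow_CC_LIBRARY)
  message(FATAL_ERROR "libtensorflow_cc not found under $ENV{TENSORFLOW_BUILD_DIR}")
endif()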
@PatWie I really appreciate your patience. After solving the include problem above, I now get the linker errors below:
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::internal::LogMessageFatal::LogMessageFatal(char const *,int)" (??0LogMessageFatal@internal@tensorflow@@QEAA@PEBDH@Z) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: virtual __cdecl tensorflow::internal::LogMessageFatal::~LogMessageFatal(void)" (??1LogMessageFatal@internal@tensorflow@@UEAA@XZ) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const *)" (??0CheckOpMessageBuilder@internal@tensorflow@@QEAA@PEBD@Z) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder(void)" (??1CheckOpMessageBuilder@internal@tensorflow@@QEAA@XZ) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: class std::basic_ostream<char,struct std::char_traits<char> > * __cdecl tensorflow::internal::CheckOpMessageBuilder::ForVar2(void)" (?ForVar2@CheckOpMessageBuilder@internal@tensorflow@@QEAAPEAV?$basic_ostream@DU?$char_traits@D@std@@@std@@XZ) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > * __cdecl tensorflow::internal::CheckOpMessageBuilder::NewString(void)" (?NewString@CheckOpMessageBuilder@internal@tensorflow@@QEAAPEAV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) referenced in function "private: void __cdecl tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1>(class tensorflow::gtl::ArraySlice<__int64>,class Eigen::array<__int64,1> *)const " (??$FillDimsAndValidateCompatibleShape@$00@Tensor@tensorflow@@AEBAXV?$ArraySlice@_J@gtl@1@PEAV?$array@_J$00@Eigen@@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "char const * __cdecl tensorflow::core::GetVarint32PtrFallback(char const *,char const *,unsigned int *)" (?GetVarint32PtrFallback@core@tensorflow@@YAPEBDPEBD0PEAI@Z) referenced in function "char const * __cdecl tensorflow::core::GetVarint32Ptr(char const *,char const *,unsigned int *)" (?GetVarint32Ptr@core@tensorflow@@YAPEBDPEBD0PEAI@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl tensorflow::strings::StrCat(class tensorflow::strings::AlphaNum const &)" (?StrCat@strings@tensorflow@@YA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@AEBVAlphaNum@12@@Z) referenced in function "class tensorflow::Status __cdecl tensorflow::errors::Internal<char const *>(char const *)" (??$Internal@PEBD@errors@tensorflow@@YA?AVStatus@1@PEBD@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Status::Status(enum tensorflow::error::Code,class tensorflow::StringPiece)" (??0Status@tensorflow@@QEAA@W4Code@error@1@VStringPiece@1@@Z) referenced in function "class tensorflow::Status __cdecl tensorflow::errors::Internal<char const *>(char const *)" (??$Internal@PEBD@errors@tensorflow@@YA?AVStatus@1@PEBD@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "private: void __cdecl tensorflow::Tensor::CheckType(enum tensorflow::DataType)const " (?CheckType@Tensor@tensorflow@@AEBAXW4DataType@2@@Z) referenced in function "public: class Eigen::TensorMap<class Eigen::Tensor<int const ,1,1,__int64>,16,struct Eigen::MakePointer> __cdecl tensorflow::Tensor::shaped<int,1>(class tensorflow::gtl::ArraySlice<__int64>)const " (??$shaped@H$00@Tensor@tensorflow@@QEBA?AV?$TensorMap@V?$Tensor@$$CBH$00$00_J@Eigen@@$0BA@UMakePointer@2@@Eigen@@V?$ArraySlice@_J@gtl@1@@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "private: void __cdecl tensorflow::Tensor::CheckTypeAndIsAligned(enum tensorflow::DataType)const " (?CheckTypeAndIsAligned@Tensor@tensorflow@@AEBAXW4DataType@2@@Z) referenced in function "public: class Eigen::TensorMap<class Eigen::Tensor<int,1,1,__int64>,16,struct Eigen::MakePointer> __cdecl tensorflow::Tensor::flat<int>(void)" (??$flat@H@Tensor@tensorflow@@QEAA?AV?$TensorMap@V?$Tensor@H$00$00_J@Eigen@@$0BA@UMakePointer@2@@Eigen@@XZ) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
matrix_add_op_cu_generated_matrix_add_kernel_gpu.cu.cc.obj : error LNK2019: unresolved external symbol "public: void __cdecl tensorflow::OpKernelContext::SetStatus(class tensorflow::Status const &)" (?SetStatus@OpKernelContext@tensorflow@@QEAAXAEBVStatus@2@@Z) referenced in function "public: static void __cdecl tensorflow::functor::MatrixAddFunctor<struct Eigen::GpuDevice,int>::launch(class tensorflow::OpKernelContext *,class tensorflow::Tensor const &,class tensorflow::Tensor const &,class tensorflow::Tensor *,int)" (?launch@?$MatrixAddFunctor@UGpuDevice@Eigen@@H@functor@tensorflow@@SAXPEAVOpKernelContext@3@AEBVTensor@3@1PEAV53@H@Z) [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
D:\sin\tensorflow-cmake-master\custom_op\user_ops\Release\matrix_add_op_cu.dll : fatal error LNK1120: 12 unresolved externals [D:\sin\tensorflow-cmake-master\custom_op\user_ops\matrix_add_op_cu.vcxproj]
I found a solution to a similar problem here, but I don't know how to adapt it to this situation; could you help me with this?
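In general, LNK2019 errors like these mean the custom-op DLL is not linked against a TensorFlow runtime that defines those symbols. On Windows that usually comes down to adding an import library (.lib) for the TensorFlow DLL/.pyd to the link line; a heavily hedged sketch (such a .lib is not shipped with the pip package and would first have to be generated, so the path is a placeholder):

# placeholder path to an import library generated from the TensorFlow runtime of your install
set(TF_IMPORT_LIB "D:/path/to/generated/tensorflow.lib")
# target name taken from the .vcxproj in the log above
target_link_libraries(matrix_add_op_cu "${TF_IMPORT_LIB}")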
My understanding of using Tensorflow in Visual Studio / Windows is that you need a .dll, and not a .so.
I've been able to use the libtensorflow.dll file from google, in conjunction with the inference.c / cpp examples in this repository successfully on Windows 10 + Visual Studio + CUDA. The steps are a little detailed so I will cover them soon in a different post.
@iamsurya did you finish your different post?
I haven't. I'm at a conference this week. I'm going to try to summarize the concepts here and link to them if you're in a hurry to try this. This is only for Visual Studio on Windows 10.
Steps:
1) Get the libtensorflow pre-compiled DLL.
1a) Install the correct version of CUDA and CUDNN if you're using the GPU version.
2) Create a .lib (this is missing from the zip files), or use the one I created. Example on how to create. Please call the .def file libtensorflow if you're creating your own.
3) Set up the Windows environment / paths so that your Visual Studio project can find them. Here is an example of how to set up the environment for OpenCV.
4) Create a VS project that can use the .dll / .h / .lib. Example of how to do it for OpenCV. (A CMake-based sketch of steps 3 and 4 follows below.)
5) Test a simple piece of code that calls TF_Version();
6) Run PatWie's C example for inference.
I know all you're using is his C example, but I'm unaware of any easier way to run inference.
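For reference, a minimal CMake sketch of roughly what steps 3 and 4 amount to, assuming you unpacked the prebuilt archive to C:/libtensorflow and generated tensorflow.lib as in step 2 (all names and paths here are assumptions, not something this repository ships):

cmake_minimum_required(VERSION 3.5)
project(tf_c_inference C)

# assumed layout:
#   C:/libtensorflow/include/tensorflow/c/c_api.h
#   C:/libtensorflow/lib/tensorflow.dll
#   C:/libtensorflow/lib/tensorflow.lib   (generated from the .def file)
set(LIBTENSORFLOW_ROOT "C:/libtensorflow" CACHE PATH "where the prebuilt libtensorflow was unpacked")

add_library(tensorflow_prebuilt SHARED IMPORTED)
set_target_properties(tensorflow_prebuilt PROPERTIES
  IMPORTED_LOCATION "${LIBTENSORFLOW_ROOT}/lib/tensorflow.dll"
  IMPORTED_IMPLIB "${LIBTENSORFLOW_ROOT}/lib/tensorflow.lib"
  INTERFACE_INCLUDE_DIRECTORIES "${LIBTENSORFLOW_ROOT}/include")

add_executable(inference_c inference.c)  # the C inference example from this repository
target_link_libraries(inference_c PRIVATE tensorflow_prebuilt)

With that in place, step 5 is just calling TF_Version() from main() to verify that the DLL loads.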
@iamsurya do you think extending the FindTensorflow.cmake to support Windows is doable or at least favorable? Especially steps 3 and 4 seem to be the Windows way, without any CMake involved at all.
It depends. Compiling from scratch has benefits for speed: you can use specific optimizations for the target computer or GPU, for example a specific version of the CUDA or CUDNN library. However, the drawback is that you have to set up a build environment. This is usually not too complicated on Linux; on Windows, it's often time-consuming due to how the DLLs (TensorFlow, CUDA, CUDNN, Intel) work. (Some coders prefer compiling for Windows using a Linux host!)
The benefits of using these optimizations are not worth the effort needed to build from scratch. I'm okay with installing the required versions of CUDA and CUDNN and benefiting from the standard speed-up a GPU gives, as that's the biggest one. So in my opinion, if you're just testing (and not deploying), it might be easier to just use the Windows DLLs.
@PatWie it looks like it is possible to create a DLL using Bazel, but it still names it .so (and not .dll). See this issue: https://github.com/migueldeicaza/TensorFlowSharp/issues/389#issuecomment-461380299
Do you think you could generate a .dll for me, since you have the build system for it? Just a plain vanilla GPU version. If yes, I'll confirm whether your CMake system works and we can get back to supporting this.
@lunasdejavu:
@iamsurya did you finish your different post?
Meanwhile, I've finished the guide on how to use Google's pre-compiled libtensorflow DLLs and run inference; it relies heavily on the inference code from this repository.
I use Anaconda and set up a Python 3.6.3 virtual environment with tensorflow 1.12 (GPU).
Can you tell me where to set all the paths?