Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Win10 pro Visual studio 2015
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: no
TensorFlow installed from (source or binary): trying to build from source
TensorFlow version: gpu-1.13.0
Python version: 3.6.7
Installed using virtualenv? pip? conda?: no
Bazel version (if compiling from source): 0.21.0
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: CUDA 10.0/ cudnn 7.4.2
GPU model and memory: Geforce 1080Ti
Describe the problem
I'm trying to build the TensorFlow GPU library files (tensorflow.dll and the .lib import library) with Bazel from source on Win10, but I have encountered several problems.
Provide the exact sequence of commands / steps that you executed before running into the problem
After build configure
C:\tensorflow>python configure.py
WARNING: The following rc files are no longer being read, please transfer their contents or import their path into one of the standard rc files:
nul
WARNING: Running Bazel server needs to be killed, because the startup options are different.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
INFO: Invocation ID: 412c8d20-03eb-46e5-bf73-fd54952f1e01
You have bazel 0.21.0 installed.
Please specify the location of python. [Default is C:\Users\Administrator\AppData\Local\Programs\Python\Python36\python.exe]:
Found possible Python library paths:
C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages
Please input the desired Python library path to use. Default is [C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages]
Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:
Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:
Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]:
Eigen strong inline overridden.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apache Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.
I ran
C:\tensorflow>bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/lib_package:libtensorflow
Any other info / logs
After a long time of compiling, the build did indeed succeed, but the library file it generates is
libtensorflow.so
I expect the output library files for Windows to be tensorflow.dll and tensorflow.lib, but I think this .so file is not useful on Windows, so I'd like to know:
whether I can build tensorflow.dll and tensorflow.lib files on Win10 or not?
and if not, how am I supposed to use this libtensorflow.so on Windows and deal with all the include files?
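One way to sanity-check the generated libtensorflow.so before assuming it is unusable: inspect its file header. If Bazel on Windows actually emitted a PE/COFF image (a real Windows DLL under a Unix-style name, which has been reported for this target), the file starts with the "MZ" magic bytes, whereas a Linux ELF shared object starts with "\x7fELF". A minimal sketch, assuming the output path below (adjust to your actual bazel-bin layout):

```python
import os

def image_kind(path):
    """Classify a binary as PE/COFF (Windows) or ELF (Linux) by magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic[:2] == b"MZ":      # DOS/PE header -> Windows DLL or EXE
        return "pe"
    if magic == b"\x7fELF":     # ELF header -> Linux shared object
        return "elf"
    return "unknown"

if __name__ == "__main__":
    # Hypothetical output path from the build command in this issue:
    path = r"bazel-bin\tensorflow\libtensorflow.so"
    if os.path.exists(path):
        print(image_kind(path))
```

If the file reports as PE, it is already a Windows DLL despite the .so name and may simply be renamed to tensorflow.dll; generating a matching .lib import library is a separate step.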