tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

MKL Error on Bazel 2.0 (TensorFlow 2.1 latest - nightly) #37430

Closed Expert73 closed 4 years ago

Expert73 commented 4 years ago

System information

Describe the problem: At the initial stage of the build, the following error appears.

C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include\xtr1common(163): note: see reference to class template instantiation 'std::integral_constant<bool,false>' being compiled
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include\xtr1common(163): note: see reference to class template instantiation 'std::disjunction<_Traits...>' being compiled
ERROR: C:/tensorflow/tensorflow/core/kernels/BUILD:8128:1: C++ compilation of rule '//tensorflow/core/kernels:mkl_aggregate_ops' failed (Exit 2)
.\tensorflow/core/util/mkl_util.h(1253): error C2131: expression did not evaluate to a constant
.\tensorflow/core/util/mkl_util.h(1252): note: failure was caused by a read of a variable outside its lifetime
.\tensorflow/core/util/mkl_util.h(1252): note: see usage of 'dim'
.\tensorflow/core/util/mkl_util.h(1254): error C2131: expression did not evaluate to a constant
.\tensorflow/core/util/mkl_util.h(1252): note: failure was caused by a read of a variable outside its lifetime
.\tensorflow/core/util/mkl_util.h(1252): note: see usage of 'dim'
.\tensorflow/core/util/mkl_util.h(1256): error C3863: array type 'dnnl_dim_t [kNumDims]' is not assignable
.\tensorflow/core/util/mkl_util.h(1257): error C3863: array type 'dnnl_dim_t [kNumDims]' is not assignable
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 2448.446s, Critical Path: 179.48s
INFO: 4686 processes: 4686 local.
FAILED: Build did NOT complete successfully

Expert73 commented 4 years ago

Build command:

bazel --output_base=c:/bazel/output_dir/ build --config=mkl --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

ravikyram commented 4 years ago

@Expert73

Can you please provide us the error log? Thanks!

Expert73 commented 4 years ago

Tensorflow 2.1 latest (nightly)

C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include\xtr1common(163): note: see reference to class template instantiation 'std::integral_constant<bool,false>' being compiled
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include\xtr1common(163): note: see reference to class template instantiation 'std::disjunction<_Traits...>' being compiled
ERROR: C:/tensorflow/tensorflow/core/kernels/BUILD:8128:1: C++ compilation of rule '//tensorflow/core/kernels:mkl_aggregate_ops' failed (Exit 2)
.\tensorflow/core/util/mkl_util.h(1253): error C2131: expression did not evaluate to a constant
.\tensorflow/core/util/mkl_util.h(1252): note: failure was caused by a read of a variable outside its lifetime
.\tensorflow/core/util/mkl_util.h(1252): note: see usage of 'dim'
.\tensorflow/core/util/mkl_util.h(1254): error C2131: expression did not evaluate to a constant
.\tensorflow/core/util/mkl_util.h(1252): note: failure was caused by a read of a variable outside its lifetime
.\tensorflow/core/util/mkl_util.h(1252): note: see usage of 'dim'
.\tensorflow/core/util/mkl_util.h(1256): error C3863: array type 'dnnl_dim_t [kNumDims]' is not assignable
.\tensorflow/core/util/mkl_util.h(1257): error C3863: array type 'dnnl_dim_t [kNumDims]' is not assignable
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 2448.446s, Critical Path: 179.48s
INFO: 4686 processes: 4686 local.
FAILED: Build did NOT complete successfully

Expert73 commented 4 years ago

I think the problem is now here:

DCHECK_EQ(dim.size(), strides.size());
#ifdef ENABLE_MKLDNN_V1
  const int kNumDims = dim.size();
  mkldnn_dim_t input_dims[kNumDims];
  mkldnn_dim_t input_strides[kNumDims];
  for (int i = 0; i < kNumDims; ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }

In the old (working) code it was:

DCHECK_EQ(dim.size(), strides.size());
#ifdef ENABLE_MKLDNN_V1
  mkldnn_dim_t input_dims[dim.size()];
  mkldnn_dim_t input_strides[dim.size()];
  for (size_t i = 0; i < dim.size(); ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }

What do you think? How can the problem be fixed?
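
A note on the root cause: mkldnn_dim_t input_dims[kNumDims] is a variable-length array. GCC and Clang accept VLAs as an extension, but standard C++ and MSVC do not, which is exactly what C2131 ("expression did not evaluate to a constant") and C3863 complain about. One possible VLA-free rewrite, sketched with the same names as the snippet above (an illustration only, not the upstream TensorFlow fix):

#ifdef ENABLE_MKLDNN_V1
  const int kNumDims = dim.size();
  // std::vector sizes itself at run time, so no compile-time constant
  // array bound is needed (requires #include <vector>).
  std::vector<mkldnn_dim_t> input_dims(kNumDims);
  std::vector<mkldnn_dim_t> input_strides(kNumDims);
  for (int i = 0; i < kNumDims; ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }
  // Wherever the old code passed input_dims / input_strides to a C API,
  // pass input_dims.data() / input_strides.data() instead.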

khaled-besrour commented 4 years ago

I have the same problem with TensorFlow 2.2.0 and Visual Studio 2019. I worked around it with a small patch; I don't know whether it can leak memory this way.

#ifdef ENABLE_MKLDNN_V1
  const int kNumDims = dim.size();
  mkldnn_dim_t input_dims[kNumDims];
  mkldnn_dim_t input_strides[kNumDims];
  for (int i = 0; i < kNumDims; ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }
  try {
    mkldnn_memory_desc_init_by_strides(blocked_md, kNumDims, input_dims,
                                       memory::convert_to_c(dtype),
                                       input_strides);
  }
  ......

becomes

#ifdef ENABLE_MKLDNN_V1
  const int kNumDims = dim.size();
  mkldnn_dim_t * input_dims = new mkldnn_dim_t[kNumDims];
  mkldnn_dim_t * input_strides = new mkldnn_dim_t[kNumDims];
  for (int i = 0; i < kNumDims; ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }
  try {
    mkldnn_memory_desc_init_by_strides(blocked_md, kNumDims, input_dims,
                                       memory::convert_to_c(dtype),
                                       input_strides);
    delete[] input_dims;
    delete[] input_strides;
  } catch (mkldnn::error& e) {
    delete[] input_dims;
    delete[] input_strides;
    return Status(error::Code::INTERNAL,
                  tensorflow::strings::StrCat(
                      "Failed to create blocked memory descriptor.",
                      "Status: ", e.status, ", message: ", e.message));
  }
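
On the memory-leak worry: in the patch above the delete[] calls are duplicated in the success and error paths, and any exception other than mkldnn::error would skip them entirely. A sketch under the same assumptions that makes the cleanup automatic by letting std::unique_ptr own the arrays (again an illustration, not the upstream fix):

#ifdef ENABLE_MKLDNN_V1
  const int kNumDims = dim.size();
  // std::unique_ptr releases the arrays on every path out of the scope,
  // so no delete[] is needed in either branch (requires #include <memory>).
  std::unique_ptr<mkldnn_dim_t[]> input_dims(new mkldnn_dim_t[kNumDims]);
  std::unique_ptr<mkldnn_dim_t[]> input_strides(new mkldnn_dim_t[kNumDims]);
  for (int i = 0; i < kNumDims; ++i) {
    input_dims[i] = dim[i];
    input_strides[i] = strides[i];
  }
  try {
    mkldnn_memory_desc_init_by_strides(blocked_md, kNumDims, input_dims.get(),
                                       memory::convert_to_c(dtype),
                                       input_strides.get());
  } catch (mkldnn::error& e) {
    return Status(error::Code::INTERNAL,
                  tensorflow::strings::StrCat(
                      "Failed to create blocked memory descriptor.",
                      "Status: ", e.status, ", message: ", e.message));
  }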

Now my compilation stops in /Eigen/src/Core/util/ReenableStupidWarnings.h. I don't know if there is a relation between my patch and this problem.

edit 1:

No relation between the patch and my error; it's a compilation error where it can't convert from vector<long int> to vector<int64_t>.

Expert73 commented 4 years ago

I think this is a new problem

Expert73 commented 4 years ago

ERROR: C:/tensorflow/tensorflow/core/kernels/BUILD:7897:1: C++ compilation of rule '//tensorflow/core/kernels:mkl_conv_op' failed (Exit 2)
.\tensorflow/core/kernels/mkl_conv_ops.h(157): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept()'
.\tensorflow/core/kernels/mkl_conv_ops.h(157): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(182): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept()'
.\tensorflow/core/kernels/mkl_conv_ops.h(182): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(246): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept()'
.\tensorflow/core/kernels/mkl_conv_ops.h(246): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(254): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept()'
.\tensorflow/core/kernels/mkl_conv_ops.h(254): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(283): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept()'
.\tensorflow/core/kernels/mkl_conv_ops.h(283): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(474): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept'
.\tensorflow/core/kernels/mkl_conv_ops.h(474): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
.\tensorflow/core/kernels/mkl_conv_ops.h(482): error C2679: binary '=': no operator found which takes a right-hand operand of type 'std::vector<long,std::allocator>' (or there is no acceptable conversion)
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1175): note: could be 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::initializer_list<_Ty>)' with [ _Ty=dnnl::memory::dim ]
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(1167): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(const std::vector<dnnl::memory::dim,std::allocator> &)'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.25.28610\include\vector(664): note: or 'std::vector<dnnl::memory::dim,std::allocator> &std::vector<dnnl::memory::dim,std::allocator>::operator =(std::vector<dnnl::memory::dim,std::allocator> &&) noexcept'
.\tensorflow/core/kernels/mkl_conv_ops.h(482): note: while trying to match the argument list '(dnnl::memory::dims, std::vector<long,std::allocator>)'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 716.229s, Critical Path: 45.72s
INFO: 2357 processes: 2357 local.
FAILED: Build did NOT complete successfully

khaled-besrour commented 4 years ago

I solved it too by changing the type from long int to memory::dim, which seems cleaner to me. In tensorflow/core/kernels/mkl_conv_ops.h:

-#define MKLDNN_SIZE_DTYPE long int
+#define MKLDNN_SIZE_DTYPE memory::dim
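
The underlying mismatch: on 64-bit Windows long is 32 bits, so std::vector<long> and dnnl::memory::dims (a std::vector<int64_t>, i.e. long long on MSVC) are unrelated types with no assignment operator between them; on Linux x86-64 int64_t is long, which is why the same code builds there. A small standalone illustration (the variable names here are made up for the demo):

#include <cstdint>
#include <vector>

int main() {
  std::vector<std::int64_t> dims;           // stands in for dnnl::memory::dims
  std::vector<long> sizes = {1, 2, 3};      // the old MKLDNN_SIZE_DTYPE, long int

  // dims = sizes;                          // error C2679 on MSVC x64: different vector types
  dims.assign(sizes.begin(), sizes.end());  // element-wise conversion compiles everywhere
  return 0;
}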

Now the compilation has been running for many hours and doesn't want to end :(

Compiling tensorflow/core/kernels/mkl_cwise_ops_common.cc; 9752s local

Expert73 commented 4 years ago

I have an i7-9700K. For example, for TF 2.1, mkl_cwise_ops_common.cc took 16000s to compile. The complete code build lasted 7-8 hours.

Expert73 commented 4 years ago

Without MKL, the complete code build lasted 1-1.5 hours.

khaled-besrour commented 4 years ago

Thanks, I was thinking it was a bug and stopped the compilation after 11000s :( I have an i7-9750H laptop, so I think it will take 12-15 hours then. I compile TensorFlow for CUDA 10.2 and just added MKL for fun. Is it worth it or not?

I also created a pull request for the change https://github.com/tensorflow/tensorflow/pull/37785

Expert73 commented 4 years ago

A stable gain of 3-5% when training models on my CPU.

Expert73 commented 4 years ago

Thanks a lot!

Expert73 commented 4 years ago

New error at the next step.

Expert73 commented 4 years ago

ERROR: C:/tensorflow/tensorflow/lite/python/optimize/BUILD:50:1: Linking of rule '//tensorflow/lite/python/optimize:_tensorflow_lite_wrap_calibration_wrapper.so' failed (Exit 1120)
LINK : warning LNK4044: unrecognized option '/ldl'; ignored
LINK : warning LNK4044: unrecognized option '/lm'; ignored
LINK : warning LNK4044: unrecognized option '/lpthread'; ignored
mklml.lib(mklml.dll) : warning LNK4006: NULL_IMPORT_DESCRIPTOR already defined in libiomp5md.lib(libiomp5md.dll); second definition ignored
Creating library bazel-out/x64_windows-opt/bin/tensorflow/lite/python/optimize/lib_tensorflow_lite_wrap_calibration_wrapper.so.ifso and object bazel-out/x64_windows-opt/bin/tensorflow/lite/python/optimize/lib_tensorflow_lite_wrap_calibration_wrapper.so.exp
LINK : warning LNK4217: symbol '?DEVICE_CPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_CPU)' defined in 'libtensor.lo(types.o)' is imported by 'libarithmetic_optimizer.a(arithmetic_optimizer.o)' in function '"bool cdecl tensorflow::grappler::anonymous namespace'::NodeIsOnCpu(class tensorflow::NodeDef const &)" (?NodeIsOnCpu@?A0x53e44b13@grappler@tensorflow@@YA_NAEBVNodeDef@3@@Z)'
LINK : warning LNK4286: symbol '?DEVICE_CPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_CPU)' defined in 'libtensor.lo(types.o)' is imported by 'libmemory_optimizer.a(memory_optimizer.o)'
LINK : warning LNK4286: symbol '?DEVICE_CPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_CPU)' defined in 'libtensor.lo(types.o)' is imported by 'libpin_to_host_optimizer.a(pin_to_host_optimizer.o)'
LINK : warning LNK4286: symbol '?DEVICE_CPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_CPU)' defined in 'libtensor.lo(types.o)' is imported by 'libutils.a(utils.o)'
LINK : warning LNK4286: symbol '?DEVICE_GPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_GPU)' defined in 'libtensor.lo(types.o)' is imported by 'libutils.a(utils.o)'
LINK : warning LNK4217: symbol '?DEVICE_GPU@tensorflow@@3QEBDEB (char const * const tensorflow::DEVICE_GPU)' defined in 'libtensor.lo(types.o)' is imported by 'libarithmetic_optimizer.a(arithmetic_optimizer.o)' in function '"private: bool __cdecl tensorflow::grappler::anonymous namespace'::ReorderCastLikeAndValuePreserving::NodeIsOnCpuOrGpu(class tensorflow::NodeDef const )const " (?NodeIsOnCpuOrGpu@ReorderCastLikeAndValuePreserving@?A0x53e44b13@grappler@tensorflow@@AEBA_NPEBVNodeDef@4@@Z)'
LINK : warning LNK4286: symbol '?DEVICE_GPU@tensorflow@@3QEBDEB (char const const tensorflow::DEVICE_GPU)' defined in 'libtensor.lo(types.o)' is imported by 'libauto_mixed_precision.a(auto_mixed_precision.o)'
LINK : warning LNK4286: symbol '?DEVICE_GPU@tensorflow@@3QEBDEB (char const const tensorflow::DEVICE_GPU)' defined in 'libtensor.lo(types.o)' is imported by 'libmemory_optimizer.a(memory_optimizer.o)'
LINK : warning LNK4286: symbol '?DEVICE_GPU@tensorflow@@3QEBDEB (char const const tensorflow::DEVICE_GPU)' defined in 'libtensor.lo(types.o)' is imported by 'libpin_to_host_optimizer.a(pin_to_host_optimizer.o)'
LINK : warning LNK4217: symbol '?g_trace_level@internal@profiler@tensorflow@@3U?$atomic@H@std@@A (struct std::atomic tensorflow::profiler::internal::g_trace_level)' defined in 'libtraceme_recorder_impl.lo(traceme_recorder.o)' is imported by 'libbfc_allocator.a(bfc_allocator.o)' in function '"public: cdecl tensorflow::profiler::TraceMe::TraceMe<class >(class ,int)" (??$?0Vlambda_5d75ae6c1fc66f651c6900753282a5d3>@@@TraceMe@profiler@tensorflow@@QEAA@V<lambda_5d75ae6c1fc66f651c6900753282a5d3@@H@Z)'
LINK : warning LNK4217: symbol '?ThenBlasGemm@Stream@stream_executor@@QEAAAEAV12@W4Transpose@blas@2@0_K11MAEBV?$DeviceMemory@M@2@H2HMPEAV52@H@Z (public: class stream_executor::Stream & __cdecl stream_executor::Stream::ThenBlasGemm(enum stream_executor::blas::Transpose,enum stream_executor::blas::Transpose,unsigned int64,unsigned int64,unsigned int64,float,class stream_executor::DeviceMemory const &,int,class stream_executor::DeviceMemory const &,int,float,class stream_executor::DeviceMemory ,int))' defined in 'libstream_executor_pimpl.a(stream.o)' is imported by 'libcudnn_plugin.lo(cuda_dnn.o)' in function '"public: virtual bool __cdecl stream_executor::gpu::CudnnSupport::DoMatMul(class stream_executor::Stream ,class stream_executor::DeviceMemory const &,class stream_executor::DeviceMemory const &,class stream_executor::dnn::BatchDescriptor const &,class stream_executor::dnn::BatchDescriptor const &,class stream_executor::DeviceMemory )" (?DoMatMul@CudnnSupport@gpu@stream_executor@@UEAA_NPEAVStream@3@AEBV?$DeviceMemory@M@3@1AEBVBatchDescriptor@dnn@3@2PEAV53@@Z)'
libmkl_dnn.a(jit_utils.o) : error LNK2019: unresolved external symbol iJIT_GetNewMethodID referenced in function "void __cdecl dnnl::impl::cpu::jit_utils::register_jit_code(void const ,unsigned int64,char const ,char const )" (?register_jit_code@jit_utils@cpu@impl@dnnl@@YAXPEBX_KPEBD2@Z)
libmkl_dnn.a(jit_utils.o) : error LNK2019: unresolved external symbol iJIT_IsProfilingActive referenced in function "void __cdecl dnnl::impl::cpu::jit_utils::register_jit_code(void const *,unsigned int64,char const ,char const )" (?register_jit_code@jit_utils@cpu@impl@dnnl@@YAXPEBX_KPEBD2@Z)
libmkl_dnn.a(jit_utils.o) : error LNK2019: unresolved external symbol iJIT_NotifyEvent referenced in function "void __cdecl dnnl::impl::cpu::jit_utils::register_jit_code(void const ,unsigned __int64,char const ,char const *)" (?register_jit_code@jit_utils@cpu@impl@dnnl@@YAXPEBX_KPEBD2@Z)
bazel-out\x64_windows-opt\bin\tensorflow\lite\python\optimize_tensorflow_lite_wrap_calibration_wrapper.so : fatal error LNK1120: 3 unresolved externals
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 6.072s, Critical Path: 4.15s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

Expert73 commented 4 years ago

What might the error be related to?

khaled-besrour commented 4 years ago

I have a fix but it's not 100% clean. In mkl-dnn/blob/master/src/cpu/jit_utils/, I copied the content of the jitprofiling subfolder. Then I replaced

#ifndef DNNL_ENABLE_JIT_PROFILING
#define DNNL_ENABLE_JIT_PROFILING 1
#endif

with

#define DNNL_ENABLE_JIT_PROFILING 1

and

#include "jitprofiling/jitprofiling.h"

by

#include "jitprofiling.h"

I think there is no need to copy the content of the folder and edit the include; just editing the define #define DNNL_ENABLE_JIT_PROFILING 1 will make it work.
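
For reference, the unresolved iJIT_GetNewMethodID / iJIT_IsProfilingActive / iJIT_NotifyEvent symbols come from oneDNN's VTune JIT-profiling hooks. If forcing the define on still leaves them unresolved, another workaround some builders use, assuming the VTune calls in the jit_utils source are guarded by this flag (not verified against this exact source tree), is to switch the feature off so the iJIT_* calls are compiled out rather than left for the linker:

// In the file quoted above (under ...\external\mkl_dnn_v1\src\cpu\jit_utils),
// replace the #ifndef block with an explicit 0 so the guarded profiling code
// and its iJIT_* references drop out of the build.
#define DNNL_ENABLE_JIT_PROFILING 0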

Expert73 commented 4 years ago

mkl-dnn/blob/master/src/cpu/jit_utils/

where is it?

khaled-besrour commented 4 years ago

My bad, I gave you the GitHub folder path. I deleted my installation, but I think it's in the Bazel temp folder, in a folder named mkl_dnn_v1 or something like that. Do a search for jit_utils in the tensorflow folder; Windows will find it.

edit 1 : tensorflow\bazel-tensorflow\external\mkl_dnn_v1\src\cpu\jit_utils

Expert73 commented 4 years ago

khaled-besrour, thanks! Everything works!

ravikyram commented 4 years ago

@Expert73

Please close this thread if it solves your question. Thanks!

google-ml-butler[bot] commented 4 years ago

Are you satisfied with the resolution of your issue?