apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

centos 6.5 build failed -- /usr/bin/ld: cannot find -lnvrtc #1219

Closed: mittlin closed this issue 7 years ago

mittlin commented 8 years ago

Can anybody help me with this? I guess something is wrong with CUDA, but Caffe builds fine on this machine, so why not MXNet?

[ist@node-gpu build]$ cmake .. -- Found MKL (include: /opt/intel/mkl/include, lib: /opt/intel/mkl/lib/intel64/libmkl_rt.so -- CUDA detected: 7.0 -- Found cuDNN (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so) -- Added CUDA NVCC flags for: sm_35 -- OpenCV found (/usr/local/share/OpenCV) -- Found cuDNN (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so) -- Configuring done -- Generating done -- Build files have been written to: /home/ist/mxnet/build [ist@node-gpu build]$ make -j8 [ 1%] [ 2%] [ 4%] [ 4%] [ 5%] [ 6%] [ 7%] Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/io.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/recordio.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/io/line_split.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/config.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/data.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/io/recordio_split.cc.o Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/io/input_split_base.cc.o [ 8%] Building CXX object dmlc-core/CMakeFiles/dmlccore.dir/src/io/local_filesys.cc.o In file included from /home/ist/mxnet/dmlc-core/src/data/././text_parser.h:11:0, from /home/ist/mxnet/dmlc-core/src/data/./libsvm_parser.h:13, from /home/ist/mxnet/dmlc-core/src/data/disk_row_iter.h:19, from /home/ist/mxnet/dmlc-core/src/data.cc:12: /home/ist/mxnet/dmlc-core/include/dmlc/omp.h:15:81: note: #pragma message: Warning: OpenMP is not available, project will be compiled into single-thread code. Use OpenMP-enabled compiler to get benefit of multi-threading. "Use OpenMP-enabled compiler to get benefit of multi-threading.") ^ Linking CXX static library libdmlccore.a [ 8%] Built target dmlccore [ 9%] [ 10%] [ 11%] [ 12%] [ 13%] [ 15%] [ 16%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/ndarray/./cuda_compile_generated_unary_function.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_elementwise_binary_scalar_op.cu.o [ 17%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_concat.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_block_grad.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_fully_connected.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_softmax_activation.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_activation.cu.o Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_reshape.cu.o /home/ist/mxnet/src/ndarray/./../operator/mshadow_op.h(205): warning: floating-point value does not fit in required integral type detected during: instantiation of "DType mxnet::op::mshadow_op::sign::Map(DType) [with DType=uint8_t]" /home/ist/mxnet/mshadow/mshadow/././expr_engine-inl.h(127): here instantiation of "DType mshadow::expr::Plan<mshadow::expr::UnaryMapExp<OP, TA, DType, etype>, DType>::Eval(mshadow::index_t, mshadow::index_t) const [with OP=mxnet::op::mshadow_op::sign, TA=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, etype=1, DType=uint8_t]" /home/ist/mxnet/mshadow/mshadow/././expr_engine-inl.h(114): here instantiation of "DType mshadow::expr::Plan<mshadow::expr::BinaryMapExp<OP, TA, TB, DType, etype>, DType>::Eval(mshadow::index_t, mshadow::index_t) const [with 
OP=mshadow::op::mul, TA=mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, TB=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, etype=1, DType=uint8_t]" /home/ist/mxnet/mshadow/mshadow/././cuda/tensor_gpu-inl.cuh(61): here instantiation of "void mshadow::cuda::MapPlanProc<Saver,DstPlan,Plan,block_dim_bits>(DstPlan, mshadow::index_t, mshadow::Shape<2>, Plan, int) [with Saver=mshadow::sv::saveto, DstPlan=mshadow::expr::Plan<mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t>, Plan=mshadow::expr::Plan<mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, uint8_t>, block_dim_bits=8]" /home/ist/mxnet/mshadow/mshadow/././cuda/tensor_gpu-inl.cuh(69): here instantiation of "void mshadow::cuda::MapPlanKernel<Saver,block_dim_bits,DstPlan,Plan>(DstPlan, mshadow::index_t, mshadow::Shape<2>, Plan) [with Saver=mshadow::sv::saveto, block_dim_bits=8, DstPlan=mshadow::expr::Plan<mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t>, Plan=mshadow::expr::Plan<mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, uint8_t>]" /home/ist/mxnet/mshadow/mshadow/././cuda/tensor_gpu-inl.cuh(95): here instantiation of "void mshadow::cuda::MapPlan<Saver,DstExp,E,DType>(mshadow::expr::Plan<DstExp, DType>, const mshadow::expr::Plan<E, DType> &, mshadow::Shape<2>, cudaStream_t) [with Saver=mshadow::sv::saveto, DstExp=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, E=mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, DType=uint8_t]" /home/ist/mxnet/mshadow/mshadow/./tensor_gpu-inl.h(114): here instantiation of "void mshadow::MapExp<Saver,R,dim,DType,E,etype>(mshadow::TRValue<R, mshadow::gpu, dim, DType> , const mshadow::expr::Exp<E, DType, etype> &) [with Saver=mshadow::sv::saveto, R=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, dim=2, DType=uint8_t, E=mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, etype=1]" /home/ist/mxnet/mshadow/mshadow/././expr_engine-inl.h(389): here instantiation of "void mshadow::expr::ExpEngine<SV, RV, DType>::Eval(RV , const mshadow::expr::Exp<E, DType, 1> &) [with SV=mshadow::sv::saveto, RV=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, DType=uint8_t, E=mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>]" /home/ist/mxnet/mshadow/mshadow/./expression.h(168): here instantiation of "Container &mshadow::expr::RValueExp<Container, DType>::__assign(const mshadow::expr::Exp<E, DType, etype> &) [with Container=mshadow::Tensor<mxnet::gpu, 2, uint8_t>, DType=uint8_t, E=mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, etype=1]" /home/ist/mxnet/mshadow/mshadow/tensor.h(408): here instantiation of "mshadow::Tensor<Device, dimension, DType> 
&mshadow::Tensor<Device, dimension, DType>::operator=(const mshadow::expr::Exp<E, DType, etype> &) [with Device=mxnet::gpu, dimension=2, DType=uint8_t, E=mshadow::expr::BinaryMapExp<mshadow::op::mul, mshadow::expr::UnaryMapExp<mxnet::op::mshadow_op::sign, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, mshadow::Tensor<mxnet::gpu, 2, uint8_t>, uint8_t, 1>, etype=1]" /home/ist/mxnet/src/ndarray/./unaryfunction-inl.h(54): here instantiation of "void mxnet::ndarray::UnaryBackwardUseIn<xpu,OP>(const mxnet::common::arg::OutGrad &, const mxnet::common::arg::Input0 &, mxnet::TBlob *, mxnet::OpReqType, mxnet::RunContext) [with xpu=mxnet::gpu, OP=mxnet::op::mshadow_op::sign]" /home/ist/mxnet/src/ndarray/./unary_function-inl.h(148): here

[ 18%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_cudnn_batch_norm.cu.o [ 19%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_elementwise_sum.cu.o [ 20%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_crop.cu.o [ 21%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_identity_attach_KL_sparse_reg.cu.o [ 22%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_elementwise_binary_op.cu.o [ 23%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_batch_norm.cu.o [ 24%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_softmax_output.cu.o [ 25%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_leaky_relu.cu.o [ 26%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_pooling.cu.o [ 27%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_dropout.cu.o [ 29%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_native_op.cu.o [ 30%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_deconvolution.cu.o [ 31%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_cast.cu.o [ 32%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_upsampling.cu.o [ 33%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_lrn.cu.o [ 34%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_embedding.cu.o [ 35%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_swapaxis.cu.o [ 36%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_slice_channel.cu.o [ 37%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_regression_output.cu.o [ 38%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/operator/./cuda_compile_generated_convolution.cu.o [ 39%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/optimizer/./cuda_compile_generated_sgd.cu.o [ 40%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/ndarray/./cuda_compile_generated_ndarray_function.cu.o [ 41%] [ 43%] [ 44%] [ 45%] [ 46%] [ 47%] Building CXX object CMakeFiles/mxnet.dir/src/engine/engine.cc.o [ 48%] [ 49%] Building CXX object CMakeFiles/mxnet.dir/src/engine/threaded_engine_perdevice.cc.o Building CXX object CMakeFiles/mxnet.dir/src/engine/threaded_engine_pooled.cc.o Building CXX object CMakeFiles/mxnet.dir/src/engine/naive_engine.cc.o Building CXX object CMakeFiles/mxnet.dir/src/engine/threaded_engine.cc.o Building CXX object CMakeFiles/mxnet.dir/src/operator/reshape.cc.o Building CXX object CMakeFiles/mxnet.dir/src/operator/operator.cc.o Building CXX object CMakeFiles/mxnet.dir/src/operator/deconvolution.cc.o [ 50%] Building CXX object CMakeFiles/mxnet.dir/src/operator/elementwise_binary_scalar_op.cc.o [ 51%] Building CXX object CMakeFiles/mxnet.dir/src/operator/softmax_output.cc.o [ 52%] Building CXX object CMakeFiles/mxnet.dir/src/operator/native_op.cc.o [ 53%] Building CXX 
object CMakeFiles/mxnet.dir/src/operator/swapaxis.cc.o [ 54%] Building CXX object CMakeFiles/mxnet.dir/src/operator/cast.cc.o [ 55%] Building CXX object CMakeFiles/mxnet.dir/src/operator/embedding.cc.o [ 56%] Building CXX object CMakeFiles/mxnet.dir/src/operator/convolution.cc.o [ 58%] Building CXX object CMakeFiles/mxnet.dir/src/operator/elementwise_sum.cc.o [ 59%] Building CXX object CMakeFiles/mxnet.dir/src/operator/block_grad.cc.o [ 60%] Building CXX object CMakeFiles/mxnet.dir/src/operator/activation.cc.o [ 61%] Building CXX object CMakeFiles/mxnet.dir/src/operator/cudnn_batch_norm.cc.o [ 62%] Building CXX object CMakeFiles/mxnet.dir/src/operator/softmax_activation.cc.o [ 63%] Building CXX object CMakeFiles/mxnet.dir/src/operator/lrn.cc.o [ 64%] Building CXX object CMakeFiles/mxnet.dir/src/operator/dropout.cc.o [ 65%] Building CXX object CMakeFiles/mxnet.dir/src/operator/identity_attach_KL_sparse_reg.cc.o [ 66%] Building CXX object CMakeFiles/mxnet.dir/src/operator/pooling.cc.o [ 67%] Building CXX object CMakeFiles/mxnet.dir/src/operator/concat.cc.o [ 68%] Building CXX object CMakeFiles/mxnet.dir/src/operator/fully_connected.cc.o [ 69%] Building CXX object CMakeFiles/mxnet.dir/src/operator/slice_channel.cc.o [ 70%] Building CXX object CMakeFiles/mxnet.dir/src/operator/batch_norm.cc.o [ 72%] Building CXX object CMakeFiles/mxnet.dir/src/operator/upsampling.cc.o [ 73%] Building CXX object CMakeFiles/mxnet.dir/src/operator/crop.cc.o [ 74%] Building CXX object CMakeFiles/mxnet.dir/src/operator/ndarray_op.cc.o [ 75%] Building CXX object CMakeFiles/mxnet.dir/src/operator/regression_output.cc.o [ 76%] Building CXX object CMakeFiles/mxnet.dir/src/operator/elementwise_binary_op.cc.o [ 77%] Building CXX object CMakeFiles/mxnet.dir/src/operator/leaky_relu.cc.o [ 78%] Building CXX object CMakeFiles/mxnet.dir/src/operator/cross_device_copy.cc.o [ 79%] Building CXX object CMakeFiles/mxnet.dir/src/storage/storage.cc.o [ 80%] Building CXX object CMakeFiles/mxnet.dir/src/kvstore/kvstore.cc.o [ 81%] Building CXX object CMakeFiles/mxnet.dir/src/symbol/graph_executor.cc.o [ 82%] Building CXX object CMakeFiles/mxnet.dir/src/symbol/static_graph.cc.o [ 83%] Building CXX object CMakeFiles/mxnet.dir/src/symbol/symbol.cc.o [ 84%] Building CXX object CMakeFiles/mxnet.dir/src/resource.cc.o [ 86%] Building CXX object CMakeFiles/mxnet.dir/src/common/tblob_op_registry.cc.o [ 87%] Building CXX object CMakeFiles/mxnet.dir/src/common/mxrtc.cc.o [ 88%] Building CXX object CMakeFiles/mxnet.dir/src/io/iter_image_recordio.cc.o [ 89%] Building CXX object CMakeFiles/mxnet.dir/src/io/io.cc.o [ 90%] Building CXX object CMakeFiles/mxnet.dir/src/io/iter_csv.cc.o [ 91%] Building CXX object CMakeFiles/mxnet.dir/src/io/iter_mnist.cc.o [ 92%] Building CXX object CMakeFiles/mxnet.dir/src/c_api/c_api_error.cc.o [ 93%] Building CXX object CMakeFiles/mxnet.dir/src/c_api/c_predict_api.cc.o [ 94%] Building CXX object CMakeFiles/mxnet.dir/src/c_api/c_api.cc.o [ 95%] Building CXX object CMakeFiles/mxnet.dir/src/optimizer/optimizer.cc.o [ 96%] Building CXX object CMakeFiles/mxnet.dir/src/optimizer/sgd.cc.o [ 97%] Building CXX object CMakeFiles/mxnet.dir/src/ndarray/ndarray.cc.o [ 98%] Building CXX object CMakeFiles/mxnet.dir/src/ndarray/unary_function.cc.o [100%] Building CXX object CMakeFiles/mxnet.dir/src/ndarray/ndarray_function.cc.o Linking CXX shared library liblibmxnet.so /usr/bin/ld: cannot find -lnvrtc collect2: error: ld returned 1 exit status make[2]: * [liblibmxnet.so] Error 1 make[1]: * [CMakeFiles/mxnet.dir/all] Error 2 make: *\ 
[all] Error 2
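The linker error at the end is the key part: ld cannot locate libnvrtc.so. One quick way to check whether the library is installed and where it lives (the paths below are typical CUDA install locations and may differ on your system):

find /usr/local/cuda* -name 'libnvrtc*' 2>/dev/null   # look for the NVRTC library in common CUDA install trees
echo "$LIBRARY_PATH"                                  # directories the linker is told to search, if any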

qiaohaijun commented 8 years ago

Can you run 'locate libnvrtc.so'?

piiswrong commented 8 years ago

set USE_NVRTC=0 in config.mk
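In case it helps later readers, a minimal sketch of that workflow, assuming the Makefile-based build and a checkout at ~/mxnet (the path, the sed one-liner, and the job count are just examples; you can also edit config.mk by hand):

cd ~/mxnet                                             # repository root (example path)
cp make/config.mk .                                    # copy the template if you have not already
sed -i 's/^USE_NVRTC *=.*/USE_NVRTC = 0/' config.mk    # disable CUDA runtime compilation
make clean
make -j8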

mittlin commented 8 years ago

Thanks. Using config.mk worked.

wangdelp commented 8 years ago

@mittlin @piiswrong Hi, I am getting the same error after following the same steps, i.e. 1) mkdir build, 2) cd build, 3) cmake .., 4) make -j72. I've checked that USE_NVRTC was already set to zero at the time I pulled from GitHub. I am not sure where I should put the config.mk file: just copy it under the root directory (~/mxnet), or copy it into the build directory (~/build/mxnet)? Do you have any clue how to fix this? Thank you.

/home/xeraph/mxnet/src/operator/./upsampling-inl.h: In instantiation of ‘void mxnet::op::UpSamplingNearestOp<xpu>::Backward(const mxnet::OpContext&, const std::vector<mshadow::TBlob>&, const std::vector<mshadow::TBlob>&, const std::vector<mshadow::TBlob>&, const std::vector<mxnet::OpReqType>&, const std::vector<mshadow::TBlob>&, const std::vector<mshadow::TBlob>&) [with xpu = mshadow::cpu]’:
/home/xeraph/mxnet/src/operator/upsampling.cc:51:1: required from here
/home/xeraph/mxnet/dmlc-core/include/dmlc/logging.h:75:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
#define CHECK_EQ(x, y) CHECK((x) == (y))
                              ^
/home/xeraph/mxnet/dmlc-core/include/dmlc/logging.h:68:9: note: in definition of macro ‘CHECK’
 if (!(x)) \
      ^
/home/xeraph/mxnet/src/operator/./upsampling-inl.h:116:5: note: in expansion of macro ‘CHECK_EQ’
 CHECK_EQ(ingrad.size(), param.num_args);
 ^
Linking CXX shared library liblibmxnet.so
/usr/bin/ld: cannot find -lnvrtc
collect2: error: ld returned 1 exit status
make[2]: *** [liblibmxnet.so] Error 1
make[1]: *** [CMakeFiles/mxnet.dir/all] Error 2
make: *** [all] Error 2
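Note that config.mk is read by the Makefile-based build; a CMake build run from a separate build directory is configured through CMake's own options and does not pick up config.mk, which may be why the flag appears to have no effect here. A rough sketch of the two build paths (directory names are examples):

# Makefile build: honors config.mk, including USE_NVRTC
cd ~/mxnet
cp make/config.mk .        # edit as needed, e.g. USE_NVRTC = 0
make -j8

# CMake build: driven by CMake options and cache, not config.mk
mkdir -p ~/mxnet/build
cd ~/mxnet/build
cmake ..
make -j8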

mittlin commented 8 years ago

Copy config.mk to the root directory (~/mxnet). This is my config.mk:

#-------------------------------------------------------------------------------
#  Template configuration for compiling mxnet
#
#  If you want to change the configuration, please use the following
#  steps. Assume you are on the root directory of mxnet. First copy this
#  file so that any local changes will be ignored by git
#
#  $ cp make/config.mk .
#
#  Next modify the corresponding entries, and then compile by
#
#  $ make
#
#  or build in parallel with 8 threads
#
#  $ make -j8
#-------------------------------------------------------------------------------

#---------------------
# choice of compiler
#--------------------

export CC = gcc
export CXX = g++
export NVCC = nvcc

# whether compile with debug
DEBUG = 0

# the additional link flags you want to add
ADD_LDFLAGS =

# the additional compile flags you want to add
ADD_CFLAGS =

#---------------------------------------------
# matrix computation libraries for CPU/GPU
#---------------------------------------------

# whether use CUDA during compile
USE_CUDA = 1

# add the path to CUDA library to link and compile flag
# if you have already added them to the environment variable, leave it as NONE
USE_CUDA_PATH = /usr/local/cuda
# USE_CUDA_PATH = NONE

# whether use CUDNN R3 library
USE_CUDNN = 1

# whether use cuda runtime compiling for writing kernels in native language (i.e. Python)
USE_NVRTC = 0

# whether use opencv during compilation
# you can disable it, however, you will not be able to use the
# imbin iterator
USE_OPENCV = 1

# use openmp for parallelization
USE_OPENMP = 1

# choose the version of blas you want to use
# can be: mkl, blas, atlas, openblas
# by default use atlas for linux and apple for osx
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S), Darwin)
USE_BLAS = apple
else
USE_BLAS = mkl
endif

# add path to intel library, you may need it for MKL, if you did not add the path
# to the environment variable
USE_INTEL_PATH = /opt/intel

# If using MKL, choose static link automatically to allow the python wrapper
ifeq ($(USE_BLAS), mkl)
USE_STATIC_MKL = 1
else
USE_STATIC_MKL = NONE
endif

#----------------------------
# distributed computing
#----------------------------

# whether or not to enable multi-machine support
USE_DIST_KVSTORE = 0

# whether or not to allow reading and writing HDFS directly. If yes, then hadoop is
# required
USE_HDFS = 0

# path to libjvm.so. required if USE_HDFS=1
LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server

# whether or not to allow reading and writing AWS S3 directly. If yes, then
# libcurl4-openssl-dev is required, it can be installed on Ubuntu by
#   sudo apt-get install -y libcurl4-openssl-dev
USE_S3 = 0

#----------------------------
# additional operators
#----------------------------

# path to folders containing project-specific operators that you don't want to put in src/operators
EXTRA_OPERATORS =
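A quick way to confirm which values the Makefile will actually read from the copied file (the path is an example):

cd ~/mxnet
grep -nE '^(USE_CUDA|USE_CUDA_PATH|USE_CUDNN|USE_NVRTC)' config.mk   # print the active settings and their line numbers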

piiswrong commented 8 years ago

@wangdelp Copy it to the mxnet root. If you still see the same error, try pulling the latest code.

wangdelp commented 8 years ago

@mittlin @piiswrong Hmm, my code is up to date when I run git pull. I've copied your config.mk literally under the ~/mxnet directory but still got the same error. It's still failing to find the libnvrtc.so library...

I've solved it by adding the path containing the .so file to the environment variable LIBRARY_PATH: export LIBRARY_PATH=/usr/local/cuda-7.0/targets/x86_64-linux/lib
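For anyone else hitting this, a sketch of that workaround; the CUDA 7.0 path below is specific to this install and will differ on other machines:

ls /usr/local/cuda-7.0/targets/x86_64-linux/lib/libnvrtc*                       # confirm the library really lives there
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-7.0/targets/x86_64-linux/lib  # add it to the linker's search path
cd ~/mxnet/build && make -j8                                                    # relink with the new search path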

wangdelp commented 8 years ago

@piiswrong It seems to me that using the NVRTC library would lead to better performance, per the following page: http://docs.nvidia.com/cuda/nvrtc/index.html#axzz3zJaU1Prj

Do you think it would be good to make the installation tutorial compatible with setting USE_NVRTC=1? It should be easy, since the library is included in the latest CUDA release; you just need to set the environment variable. Thanks!

hariag commented 8 years ago

Setting USE_NVRTC=1 and export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64 works for me.
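Spelled out, that combination might look like the following, assuming the usual /usr/local/cuda symlink and the Makefile-based build:

export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64   # let ld find libnvrtc.so at link time
# then, in config.mk:
#   USE_NVRTC = 1
cd ~/mxnet && make -j8

Adding the export line to your shell profile makes it persist across sessions.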

diPDew commented 8 years ago

@wangdelp, may I ask whether you've solved the nvrtc issue when using CMake?

phunterlau commented 7 years ago

This issue is closed due to lack of activity in the last 90 days. Feel free to reopen if this is still an active issue. Thanks!