facebookarchive / caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework.
https://caffe2.ai
Apache License 2.0

error: mismatched argument pack lengths while expanding ‘std::is_constructible #1898

Open elcou opened 6 years ago

elcou commented 6 years ago

Hi, I'm trying to build Caffe2 with GPU support. The CMake configuration runs fine, but when building I get the output below. Can someone help me with this, please?

Thanks a lot!

System information

CMake summary output

******** Summary ********
[ 45%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {long unsigned int&, long unsigned int&, long unsigned int&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {long unsigned int&, long unsigned int&, long unsigned int&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/elodie/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/elodie/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00004713_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
CMake Error at caffe2_gpu_generated_abs_op.cu.o.Release.cmake:278 (message):
  Error generating file
  /home/elodie/caffe2/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_abs_op.cu.o

caffe2/CMakeFiles/caffe2_gpu.dir/build.make:1334: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o] Error 1
CMakeFiles/Makefile2:1957: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
pjh5 commented 6 years ago

Hi @elcou, sorry for the delay. Is there a particular reason why you need the flag -DCMAKE_CXX_COMPILER=g++-6? Can you try building without it?

elcou commented 6 years ago

No problem, and thanks for the reply. I did try without the flag, but I get the same error. Is there a specific version of g++ that you recommend?

MatthewInkawhich commented 6 years ago

Hi @elcou @pjh5. I too am experiencing this same build error.

System information

Operating system: Ubuntu 18.04
Compiler version: gcc 6.4
CMake version: 3.9.5
CMake arguments: NONE
Relevant libraries/versions (e.g. CUDA): CUDA v9.1 && cuDNN v5.1

Has any progress been made on this issue? Or any recommendations?

pjh5 commented 6 years ago

I've never seen an error coming from a place like this before: '/usr/include/c++/6/tuple'. We haven't tested Caffe2 on Ubuntu 17 or 18. Do you happen to know if there have been any changes related to the default compilers since Ubuntu 16.04?

Do you also know if your CUDA setup is working? Do the included demos / samples work okay?
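Something like this is what I have in mind for checking the samples (a rough sketch; it assumes the CUDA 9.1 samples live under /usr/local/cuda-9.1/samples, and you may need to copy them somewhere writable first):

cd /usr/local/cuda-9.1/samples/1_Utilities/deviceQuery
make
./deviceQuery   # should list your GPU and finish with "Result = PASS"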

labor00 commented 6 years ago

I also have a similar problem to the one described here. I can use/compile CUDA software; for instance, I have built and run the TensorFlow package with GPU support.

System Information

Operating system: Fedora 27
Compiler version: gcc 6.4
CMake version: 3.10.1
CMake arguments: NONE
Relevant libraries/versions (e.g. CUDA): CUDA v9.1 (also tried with v9.0) && cuDNN v7.0.5

Error

/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/build/caffe2/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/build/caffe2/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/build/caffe2/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/build/caffe2/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007e69_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
 ^
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/build/caffe2/caffe2/caffe2/operators/elementwise_op.h:241:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/build/caffe2/caffe2/caffe2/core/operator.h:640:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/build/caffe2/caffe2/caffe2/core/operator.h:642:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/build/caffe2/caffe2/caffe2/operators/elementwise_op.h:215:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007e69_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
 ^
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/lib64/gcc/x86_64-redhat-linux/6.4.0/include/c++/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
CMake Error at caffe2_gpu_generated_abs_op.cu.o.Release.cmake:275 (message):
  Error generating file
  /build/caffe2/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_abs_op.cu.o

make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/build.make:1351: caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1979: caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

elcou commented 6 years ago

@pjh5 Yes, my CUDA setup is working fine. The default compiler for Ubuntu 17 seems to be gcc-7. I am using gcc-6 because I read somewhere that CUDA 9 requires gcc-6. The problem obviously seems to be related to the compiler... but I don't know where else to look now.

MatthewInkawhich commented 6 years ago

@pjh5 My CUDA seems to be working fine as well. I experienced the same issue as @elcou and had to switch the default compiler from gcc-7 to gcc-6, because I got an error stating that there was a compatibility issue between gcc-7 and CUDA 9.
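In case it helps anyone else, this is roughly how the build can be pointed at gcc-6 without changing the system default (a sketch only; it assumes gcc-6/g++-6 are installed, and the flag names mirror the cmake invocation shown later in this thread):

# select gcc-6 for the C/C++ build and as nvcc's host compiler
cmake .. \
  -DUSE_CUDA=ON \
  -DCMAKE_C_COMPILER=gcc-6 \
  -DCMAKE_CXX_COMPILER=g++-6 \
  -DCUDA_HOST_COMPILER=/usr/bin/gcc-6
make -j"$(nproc)"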

pjh5 commented 6 years ago

@elcou Did using gcc-6 fix the issue for you or lead to a different error message?

@pietern Do you have an idea what is going on here?

pjh5 commented 6 years ago

Actually, I see that this was addressed in #1636, which in turn links to https://devtalk.nvidia.com/default/topic/1028112/cuda-setup-and-installation/nvcc-bug-related-to-gcc-6-lt-tuple-gt-header-/
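If you want to confirm that you are hitting that nvcc/gcc-6 <tuple> issue rather than something Caffe2-specific, a minimal repro along these lines should fail the same way (a sketch on my part; it just mirrors the std::tie call made in caffe2/operators/elementwise_op.h:241, compiled as device code with gcc-6 as the host compiler):

cat > tie_repro.cu << 'EOF'
#include <tuple>
// Mirrors the std::tie usage in caffe2/operators/elementwise_op.h around line 241.
bool Repro() {
  size_t pre, n, post;
  std::tie(pre, n, post) = std::make_tuple(1ul, 2ul, 3ul);
  return pre + n + post == 6;
}
EOF
nvcc -std=c++11 -ccbin g++-6 -c tie_repro.cu
# expected on the affected toolchain: the same "mismatched argument pack lengths" errors from <tuple>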

labor00 commented 6 years ago

Because of the bug described by @pjh5 in the last comment, I tried using an older version of GCC (5.5). With this version I was able to compile the package with GPU support (CUDA 9.1 / cuDNN 7.0.5). However, I also had to compile the Intel TBB library myself to avoid linking errors, since the system version shipped with Fedora 27 was built with a newer GCC version.

Is it possible to compile the CUDA code of Caffe2 using the clang compiler?

pjh5 commented 6 years ago

@labor00 We do test Caffe2 with clang on Ubuntu 16.04. The code used to switch from gcc to clang is here: https://github.com/caffe2/caffe2/blob/master/docker/jenkins/common/install_clang.sh
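Once clang is installed, pointing the build at it looks roughly like this (a sketch I have not verified on Fedora; whether nvcc 9.x accepts clang as its host compiler also depends on the exact CUDA and clang versions):

cmake .. \
  -DUSE_CUDA=ON \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCUDA_HOST_COMPILER=/usr/bin/clang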

anatlin commented 6 years ago

Hey @elcou, did you manage to solve this?

teaglin commented 6 years ago

@elcou any update?

elcou commented 6 years ago

@anatlin @teaglin I'm sorry, but some specifications changed in my project and I switched to a different library instead of Caffe2, so I haven't had time to work on this.

iGoog commented 6 years ago

I'm running into a very similar issue. I'm using an Intel 8700K processor, which pretty much forced me onto a more up-to-date Linux kernel. I've got a 1080 Ti, which, as far as I can tell, NVIDIA isn't yet supporting with newer compilers/kernels. I had a stint with Manjaro Linux, then shifted to the Ubuntu 18 beta. After reading up on these issues, I got gcc 6.3 going through a PPA (ppa:jonathonf/gcc-6.3), then installed the NVIDIA stuff. Anyhow, I'm up for trying things if anyone has suggestions.

cmake -DUSE_CUDA=ON -DBLAS=MKL -DCUDA_HOST_COMPILER=/usr/bin/gcc-6 -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6 ..
-- The CXX compiler identification is GNU 6.3.0
-- The C compiler identification is GNU 6.3.0
-- Check for working CXX compiler: /usr/bin/g++-6
-- Check for working CXX compiler: /usr/bin/g++-6 -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/gcc-6
-- Check for working C compiler: /usr/bin/gcc-6 -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Performing Test CAFFE2_LONG_IS_INT32_OR_64
-- Performing Test CAFFE2_LONG_IS_INT32_OR_64 - Success
-- Does not need to define long separately.
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success
-- NUMA is available
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Success
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extention. Will build perfkernels.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Build type not set - defaulting to Release
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/paul/Documents/code/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Found Git: /usr/bin/git (found version "2.17.0") 
-- The BLAS backend of choice:MKL
-- Found MKL: /opt/intel/mkl/include  
-- Found MKL (include: /opt/intel/mkl/include, lib: /usr/local/lib/libmkl_rt.so
-- Brace yourself, we are building NNPACK
-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/gcc-6
-- Found PythonInterp: /usr/bin/python (found version "2.7.14") 
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Caffe2: Found gflags with new-style gflags target.
-- Caffe2: Cannot find glog automatically. Using legacy find.
-- Found glog: /usr/include  
-- Caffe2: Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- git Version: v0.0.0
-- Version: 0.0.0
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_WERROR
-- Performing Test HAVE_CXX_FLAG_WERROR - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Failed
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WZERO_AS_NULL_POINTER_CONSTANT
-- Performing Test HAVE_CXX_FLAG_WZERO_AS_NULL_POINTER_CONSTANT - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Failed
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Found LMDB: /usr/include  
-- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found LevelDB: /usr/include  
-- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
-- Found Snappy: /usr/include  
-- Found Snappy  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libsnappy.so)
-- Found Numa: /usr/include  
-- Found Numa  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so)
-- OpenCV found (/usr/share/OpenCV)
CMake Warning at cmake/Dependencies.cmake:280 (find_package):
  By not providing "FindEigen3.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Eigen3", but
  CMake did not find one.

  Could not find a package configuration file provided by "Eigen3" with any
  of the following names:

    Eigen3Config.cmake
    eigen3-config.cmake

  Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
  "Eigen3_DIR" to a directory containing one of the above files.  If "Eigen3"
  provides a separate development package or SDK, be sure it has been
  installed.
Call Stack (most recent call first):
  CMakeLists.txt:103 (include)

-- Did not find system Eigen. Using third party subdirectory.
-- Found PythonInterp: /usr/bin/python (found suitable version "2.7.14", minimum required is "2.7") 
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable version "2.7.14+", minimum required is "2.7") 
-- Found NumPy: /usr/local/lib/python2.7/dist-packages/numpy/core/include (found version "1.13.3.10") 
-- NumPy ver. 1.13.3.10 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
-- Found MPI_C: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so (found version "3.1") 
-- Found MPI_CXX: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so (found version "3.1") 
-- Found MPI: TRUE (found version "3.1")  
-- MPI support found
-- MPI compile flags: -pthread
-- MPI include path: /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include/usr/lib/x86_64-linux-gnu/openmpi/include
-- MPI LINK flags path: -pthread
-- MPI libraries: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so
CMake Warning at cmake/Dependencies.cmake:334 (message):
  OpenMPI found, but it is not built with CUDA support.
Call Stack (most recent call first):
  CMakeLists.txt:103 (include)

-- Found CUDA: /usr/local/cuda-9.1 (found suitable version "9.1", minimum required is "7.0") 
-- Found CUDNN: /usr/local/cuda-9.1/include  
-- Caffe2: CUDA detected: 9.1
-- Found cuDNN: v7.1.2  (include: /usr/local/cuda-9.1/include, library: /usr/local/cuda-9.1/lib64/libcudnn.so)
-- Automatic GPU detection returned 6.1.
-- Added CUDA NVCC flags for: sm_61
-- Could NOT find NCCL (missing: NCCL_INCLUDE_DIRS NCCL_LIBRARIES) 
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
-- Could NOT find Gloo (missing: Gloo_INCLUDE_DIR Gloo_LIBRARY) 
-- MPI include path: /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include/usr/lib/x86_64-linux-gnu/openmpi/include
-- MPI libraries: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so
-- CUDA detected: 9.1
-- Found libcuda: /usr/lib/x86_64-linux-gnu/libcuda.so
-- Found libnvrtc: /usr/local/cuda-9.1/lib64/libnvrtc.so
-- Found nccl: /home/paul/Documents/code/pytorch/third_party/nccl/build/include  
CMake Warning at cmake/Dependencies.cmake:467 (message):
  mobile opengl is only used in android or ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:103 (include)

CMake Warning at cmake/Dependencies.cmake:543 (message):
  Metal is only used in ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:103 (include)

-- GCC 6.3.0: Adding gcc and gcc_s libs to link line
-- Include NCCL operators
-- Including image processing operators
-- Excluding video processing operators due to no opencv
-- Including MKL operators
-- Include Observer library
-- Using lib/python2.7/dist-packages as python relative installation path
-- Automatically generating missing __init__.py files.
CMake Warning at CMakeLists.txt:222 (message):
  Generated cmake files are only fully tested if one builds with system glog,
  gflags, and protobuf.  Other settings may generate files that are not well
  tested.

-- 
-- ******** Summary ********
-- General:
--   CMake version         : 3.10.2
--   CMake command         : /usr/bin/cmake
--   Git version           : v0.1.11-7763-g0dff2b5e3
--   System                : Linux
--   C++ compiler          : /usr/bin/g++-6
--   C++ compiler version  : 6.3.0
--   BLAS                  : MKL
--   CXX flags             :  -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization
--   Build type            : Release
--   Compile definitions   : 
-- 
--   BUILD_BINARY          : ON
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_DOCS            : OFF
--   BUILD_PYTHON          : ON
--     Python version      : 2.7.14+
--     Python includes     : /usr/include/python2.7
--   BUILD_SHARED_LIBS     : ON
--   BUILD_TEST            : ON
--   USE_ATEN              : OFF
--   USE_ASAN              : OFF
--   USE_CUDA              : ON
--     CUDA version        : 9.1
--     CuDNN version       : 7.1.2
--     CUDA root directory : /usr/local/cuda-9.1
--     CUDA library        : /usr/lib/x86_64-linux-gnu/libcuda.so
--     CUDA NVRTC library  : /usr/local/cuda-9.1/lib64/libnvrtc.so
--     CUDA runtime library: /usr/local/cuda-9.1/lib64/libcudart.so
--     CUDA include path   : /usr/local/cuda-9.1/include
--     NVCC executable     : /usr/local/cuda-9.1/bin/nvcc
--     CUDA host compiler  : /usr/bin/gcc-6
--   USE_EIGEN_FOR_BLAS    : 
--   USE_FFMPEG            : OFF
--   USE_GFLAGS            : ON
--   USE_GLOG              : ON
--   USE_GLOO              : ON
--   USE_LEVELDB           : ON
--     LevelDB version     : 1.20
--     Snappy version      : ..
--   USE_LITE_PROTO        : OFF
--   USE_LMDB              : ON
--     LMDB version        : 0.9.21
--   USE_METAL             : OFF
--   USE_MKL               : 1
--   USE_MOBILE_OPENGL     : OFF
--   USE_MPI               : ON
--   USE_NCCL              : ON
--   USE_NERVANA_GPU       : OFF
--   USE_NNPACK            : ON
--   USE_OBSERVERS         : ON
--   USE_OPENCV            : ON
--     OpenCV version      : 3.2.0
--   USE_OPENMP            : OFF
--   USE_PROF              : OFF
--   USE_REDIS             : OFF
--   USE_ROCKSDB           : OFF
--   USE_ZMQ               : OFF
-- Configuring done
-- Generating done
-- Build files have been written to: /home/paul/Documents/code/pytorch/build

....
[ 65%] Linking CXX shared library ../lib/libcaffe2.so
[ 65%] Built target caffe2
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/utils/caffe2_gpu_generated_math_gpu.cu.o
/home/paul/Documents/code/pytorch/caffe2/utils/math_gpu.cu(1119): warning: function "caffe2::math::<unnamed>::FloatTransform<T>::operator() [with T=caffe2::float16]" was declared but never referenced

/home/paul/Documents/code/pytorch/caffe2/utils/math_gpu.cu(1152): warning: function "caffe2::math::<unnamed>::SqrTransform<T>::operator() [with T=float]" was declared but never referenced

[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_context_gpu.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/image/caffe2_gpu_generated_transform_gpu.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {long unsigned int&, long unsigned int&, long unsigned int&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {long unsigned int&, long unsigned int&, long unsigned int&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/usr/include/c++/6/tuple:1545:43:   required from ‘constexpr std::tuple<_Elements& ...> std::tie(_Elements& ...) [with _Elements = {long unsigned int, long unsigned int, long unsigned int}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:10:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int&, long unsigned int&, long unsigned int&>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), long unsigned int&, long unsigned int&, long unsigned int&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<long unsigned int, long unsigned int, long unsigned int>}; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<long unsigned int, long unsigned int, long unsigned int>&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {long unsigned int, long unsigned int, long unsigned int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), long unsigned int&, long unsigned int&, long unsigned int&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), long unsigned int&, long unsigned int&, long unsigned int&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:226:26:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::DoRunWithType() [with T = float; InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:674:80:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::TypeMeta&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/core/operator.h:676:47:   required from ‘static bool caffe2::DispatchHelper<caffe2::TensorTypes<FirstType, Types ...>, ExtraArgs ...>::call(Op*, const caffe2::Tensor<Context>&) [with Op = caffe2::BinaryElementwiseOp<caffe2::TensorTypes<float>, caffe2::CUDAContext, caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor> >; Context = caffe2::CUDAContext; FirstType = float; Types = {}; ExtraArgs = {}]’
/home/paul/Documents/code/pytorch/caffe2/operators/elementwise_op.h:200:42:   required from ‘bool caffe2::BinaryElementwiseOp<InputTypes, Context, Functor, TypeMap>::RunOnDevice() [with InputTypes = caffe2::TensorTypes<float>; Context = caffe2::CUDAContext; Functor = caffe2::WithoutBroadcast<caffe2::AbsGradientCUDAFunctor>; TypeMap = caffe2::SameTypeAsInput]’
/tmp/tmpxft_00007f6f_00000000-5_abs_op.cudafe1.stub.c:20:27:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<long unsigned int, long unsigned int, long unsigned int>&&; bool <anonymous> = true; _Elements = {long unsigned int&, long unsigned int&, long unsigned int&}]’ not a return-statement
     }
 ^
CMake Error at caffe2_gpu_generated_abs_op.cu.o.Release.cmake:275 (message):
  Error generating file
  /home/paul/Documents/code/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_abs_op.cu.o

caffe2/CMakeFiles/caffe2_gpu.dir/build.make:77: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o] Error 1
CMakeFiles/Makefile2:2585: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
pjh5 commented 6 years ago

@iGoog could you try gcc 5, based on #1636, which in turn links to https://devtalk.nvidia.com/default/topic/1028112/cuda-setup-and-installation/nvcc-bug-related-to-gcc-6-lt-tuple-gt-header-/ ?
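
For reference, a minimal sketch of getting a gcc 5 toolchain in place on an Ubuntu-like system (package names and paths are assumptions and may differ per distribution):

# install a gcc 5 toolchain alongside the default compiler (assumed Ubuntu package names)
sudo apt-get install gcc-5 g++-5
gcc-5 --version        # confirm a 5.x compiler is available for nvcc's host compilation
which gcc-5            # note the path, e.g. /usr/bin/gcc-5, to pass as CUDA_HOST_COMPILER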

iGoog commented 6 years ago

Running cmake -DUSE_CUDA=ON -DBLAS=MKL -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 .. followed by sudo make install got it to compile fully (I did set up the NVIDIA stuff with gcc 6.3)... I'm not sure if installing gcc 6, much less 6.3, was necessary.
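
A minimal sketch of that configure-and-build sequence, assuming the build happens in a build/ subdirectory of the source tree and gcc-5 lives at /usr/bin/gcc-5:

# from the caffe2/pytorch source tree (paths are assumptions)
mkdir -p build && cd build
cmake -DUSE_CUDA=ON -DBLAS=MKL -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 ..
make -j"$(nproc)"
sudo make install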

The install part failed... Arguably a separate issue, possibly caused by my fairly fresh Ubuntu 18 setup (I first tried conda, but that didn't work, so I uninstalled it and went to pip... installing libboost-all-dev seemed to help). I can't seem to run "from caffe2.python import core" outside of the build directory. Setting "PYTHONPATH=/usr/local:/usr/bin/python/../.." doesn't seem to help. It might be that my pip itself is messed up, as it pretty much demands I use sudo to install anything.
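
A minimal sketch of pointing Python at an installed copy of caffe2 instead of the build tree (the /usr/local prefix is an assumption; substitute whatever prefix make install actually used):

# make the installed python package and shared libraries discoverable (paths assumed)
export PYTHONPATH=/usr/local:$PYTHONPATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH   # so libcaffe2.so / libcaffe2_gpu.so resolve
cd ~ && python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"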

paul@Picayune-beaver:~/Documents/code/pytorch/build$ python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
Success
paul@Picayune-beaver:~/Documents/code/pytorch/build$ python caffe2/python/operator_test/relu_op_test.py
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: libcaffe2_gpu.so: cannot open shared object file: No such file or directory
CRITICAL:root:Cannot load caffe2.python. Error: libcaffe2.so: cannot open shared object file: No such file or directory
paul@Picayune-beaver:~/Documents/code$ cd ~ && python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
Failure
paul@Picayune-beaver:~/Documents/code$ echo $PYTHONPATH
/usr/local:/usr/bin/python/../..
paul@Picayune-beaver:~/Documents/code$ which pip
/usr/bin/pip
paul@Picayune-beaver:~/Documents/code$ which python
/usr/bin/python
paul@Picayune-beaver:~/Documents/code$ python --version
Python 2.7.14+
paul@Picayune-beaver:~/Documents/code$ pip --version
pip 9.0.3 from /home/paul/.local/lib/python2.7/site-packages (python 2.7)
paul@Picayune-beaver:~/Documents/code$ echo $PATH
/usr/local/cuda-9.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
paul@Picayune-beaver:~/Documents/code$ echo $LD_LIBRARY_PATH
/usr/local/cuda-9.1/lib64
paul@Picayune-beaver:~/Documents/code/pytorch/build$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
attrs (17.4.0)
backports-abc (0.5)
backports.functools-lru-cache (1.5)
backports.shutil-get-terminal-size (1.0.0)
bleach (2.1.3)
certifi (2018.1.18)
chardet (3.0.4)
configparser (3.5.0)
coverage (4.5.1)
cycler (0.10.0)
decorator (4.2.1)
entrypoints (0.2.3)
enum34 (1.1.6)
functools32 (3.2.3.post2)
future (0.16.0)
futures (3.2.0)
graphviz (0.8.1)
html5lib (1.0.1)
hypothesis (3.55.3)
icc-rt (16.0.3)
idna (2.6)
intel-numpy (1.13.3.10)
intel-openmp (2018.0.0)
ipykernel (4.8.2)
ipython (5.6.0)
ipython-genutils (0.2.0)
ipywidgets (7.2.1)
Jinja2 (2.10)
jsonschema (2.6.0)
jupyter (1.0.0)
jupyter-client (5.2.3)
jupyter-console (5.2.0)
jupyter-core (4.4.0)
kiwisolver (1.0.1)
MarkupSafe (1.0)
matplotlib (2.2.2)
mistune (0.8.3)
mkl (2018.0.0)
mkl-fft (1.0.0.17)
mkl-random (1.0.0.8)
mxnet-cu91mkl (1.1.0)
nbconvert (5.3.1)
nbformat (4.4.0)
networkx (2.1)
notebook (5.4.1)
numpy (1.13.3)
pandocfilters (1.4.2)
pathlib2 (2.3.0)
pexpect (4.4.0)
pickleshare (0.7.4)
Pillow (5.1.0)
pip (9.0.3)
prompt-toolkit (1.0.15)
protobuf (3.5.2.post1)
ptyprocess (0.5.2)
pydot (1.2.4)
Pygments (2.2.0)
pyparsing (2.2.0)
python-apt (1.6.0rc2+ubuntu1)
python-dateutil (2.7.2)
python-nvd3 (0.15.0)
python-slugify (1.2.5)
pytz (2018.4)
PyWavelets (0.5.2)
PyYAML (3.12)
pyzmq (17.0.0)
qtconsole (4.3.1)
requests (2.18.4)
scandir (1.7)
scikit-image (0.13.1)
scipy (1.0.1)
Send2Trash (1.5.0)
setuptools (39.0.1)
simplegeneric (0.8.1)
singledispatch (3.4.0.3)
six (1.11.0)
subprocess32 (3.2.7)
terminado (0.8.1)
testpath (0.3.1)
tornado (5.0.2)
traitlets (4.3.2)
Unidecode (1.0.22)
urllib3 (1.22)
wcwidth (0.1.7)
webencodings (0.5.1)
widgetsnbextension (3.2.1)

paul@Picayune-beaver:~/Documents/code/pytorch/build$ aptitude search '~i python'
i   dh-python                                           - Debian helper tools for packaging Python libraries and applic
i A libboost-mpi-python-dev                             - C++ interface to the Message Passing Interface (MPI), Python 
i A libboost-mpi-python1.65-dev                         - C++ interface to the Message Passing Interface (MPI), Python 
i A libboost-mpi-python1.65.1                           - C++ interface to the Message Passing Interface (MPI), Python 
i A libboost-python-dev                                 - Boost.Python Library development files (default version)     
i A libboost-python1.65-dev                             - Boost.Python Library development files                       
i A libboost-python1.65.1                               - Boost.Python Library                                         
i A libpython-dev                                       - header files and a static library for Python (default)       
i A libpython-stdlib                                    - interactive high-level object-oriented language (default pyth
i   libpython2.7                                        - Shared Python runtime library (version 2.7)                  
i A libpython2.7-dev                                    - Header files and a static library for Python (v2.7)          
i   libpython2.7-minimal                                - Minimal subset of the Python language (version 2.7)          
i   libpython2.7-stdlib                                 - Interactive high-level object-oriented language (standard lib
i A libpython3-dev                                      - header files and a static library for Python (default)       
i   libpython3-stdlib                                   - interactive high-level object-oriented language (default pyth
i   libpython3.6                                        - Shared Python runtime library (version 3.6)                  
i A libpython3.6-dev                                    - Header files and a static library for Python (v3.6)          
i   libpython3.6-minimal                                - Minimal subset of the Python language (version 3.6)          
i   libpython3.6-stdlib                                 - Interactive high-level object-oriented language (standard lib
i A python                                              - interactive high-level object-oriented language (default vers
i A python-apt                                          - Python interface to libapt-pkg                               
i   python-apt-common                                   - Python interface to libapt-pkg (locales)                     
i   python-dev                                          - header files and a static library for Python (default)       
i A python-minimal                                      - minimal subset of the Python language (default version)      
i   python-pip                                          - Python package installer                                     
i A python-pip-whl                                      - Python package installer                                     
i   python-talloc                                       - hierarchical pool based memory allocator - Python bindings   
i A python2.7                                           - Interactive high-level object-oriented language (version 2.7)
i A python2.7-dev                                       - Header files and a static library for Python (v2.7)          
i A python2.7-minimal                                   - Minimal subset of the Python language (version 2.7)          
i   python3                                             - interactive high-level object-oriented language (default pyth
i   python3-apport                                      - Python 3 library for Apport crash report handling            
i   python3-apt                                         - Python 3 interface to libapt-pkg                             
i   python3-aptdaemon                                   - Python 3 module for the server and client of aptdaemon       
i   python3-aptdaemon.gtk3widgets                       - Python 3 GTK+ 3 widgets to run an aptdaemon client           
i   python3-asn1crypto                                  - Fast ASN.1 parser and serializer (Python 3)                  
i   python3-brlapi                                      - Braille display access via BRLTTY - Python3 bindings         
i   python3-cairo                                       - Python3 bindings for the Cairo vector graphics library       
i   python3-certifi                                     - root certificates for validating SSL certs and verifying TLS 
i   python3-cffi-backend                                - Foreign Function Interface for Python 3 calling C code - runt
i   python3-chardet                                     - universal character encoding detector for Python3            
i   python3-commandnotfound                             - Python 3 bindings for command-not-found.                     
i   python3-crypto                                      - cryptographic algorithms and protocols for Python 3          
i   python3-cryptography                                - Python library exposing cryptographic recipes and primitives 
i   python3-cups                                        - Python3 bindings for CUPS                                    
i   python3-cupshelpers                                 - Python utility modules around the CUPS printing system       
i   python3-dbus                                        - simple interprocess messaging system (Python 3 interface)    
i   python3-debconf                                     - interact with debconf from Python 3                          
i   python3-debian                                      - Python 3 modules to work with Debian-related data formats    
i   python3-defer                                       - Small framework for asynchronous programming (Python 3)      
i A python3-dev                                         - header files and a static library for Python (default)       
i   python3-distro-info                                 - information about distributions' releases (Python 3 module)  
i   python3-distupgrade                                 - manage release upgrades                                      
i   python3-distutils                                   - distutils package for Python 3.x                             
i   python3-gdbm                                        - GNU dbm database support for Python 3.x                      
i   python3-gi                                          - Python 3 bindings for gobject-introspection libraries        
i   python3-gi-cairo                                    - Python 3 Cairo bindings for the GObject library              
i   python3-httplib2                                    - comprehensive HTTP client library written for Python3        
i   python3-idna                                        - Python IDNA2008 (RFC 5891) handling (Python 3)               
i   python3-keyring                                     - store and access your passwords safely - Python 3 version of 
i   python3-keyrings.alt                                - alternate backend implementations for python3-keyring        
i   python3-launchpadlib                                - Launchpad web services client library (Python 3)             
i   python3-lazr.restfulclient                          - client for lazr.restful-based web services (Python 3)        
i   python3-lazr.uri                                    - library for parsing, manipulating, and generating URIs       
i   python3-lib2to3                                     - Interactive high-level object-oriented language (2to3, versio
i   python3-louis                                       - Python bindings for liblouis                                 
i   python3-macaroonbakery                              - Higher-level macaroon operations for Python 3                
i   python3-mako                                        - fast and lightweight templating for the Python 3 platform    
i   python3-markupsafe                                  - HTML/XHTML/XML string library for Python 3                   
i   python3-minimal                                     - minimal subset of the Python language (default python3 versio
i   python3-nacl                                        - Python bindings to libsodium (Python 3)                      
i   python3-oauth                                       - Python 3 library implementing of the OAuth protocol          
i A python3-olefile                                     - Python module to read/write MS OLE2 files                    
i   python3-pexpect                                     - Python 3 module for automating interactive applications      
i   python3-pil                                         - Python Imaging Library (Python3)                             
i   python3-pkg-resources                               - Package Discovery and Resource Access using pkg_resources    
i   python3-problem-report                              - Python 3 library to handle problem reports                   
i   python3-protobuf                                    - Python 3 bindings for protocol buffers                       
i   python3-ptyprocess                                  - Run a subprocess in a pseudo terminal from Python 3          
i   python3-pyatspi                                     - Assistive Technology Service Provider Interface - Python3 bin
i   python3-pymacaroons                                 - Macaroon library for Python 3                                
i   python3-renderpm                                    - python low level render interface                            
i   python3-reportlab                                   - ReportLab library to create PDF documents using Python3      
i   python3-reportlab-accel                             - C coded extension accelerator for the ReportLab Toolkit      
i   python3-requests                                    - elegant and simple HTTP library for Python3, built for human 
i A python3-requests-unixsocket                         - Use requests to talk HTTP via a UNIX domain socket - Python 3
i   python3-simplejson                                  - simple, fast, extensible JSON encoder/decoder for Python 3.x 
i   python3-six                                         - Python 2 and 3 compatibility library (Python 3 interface)    
i   python3-software-properties                         - manage the repositories that you install software from       
i   python3-speechd                                     - Python interface to Speech Dispatcher                        
i   python3-systemd                                     - Python 3 bindings for systemd                                
i   python3-tz                                          - Python3 version of the Olson timezone database               
i   python3-uno                                         - Python-UNO bridge                                            
i   python3-update-manager                              - python 3.x module for update-manager                         
i   python3-urllib3                                     - HTTP library with thread-safe connection pooling for Python3 
i   python3-wadllib                                     - Python 3 library for navigating WADL files                   
i   python3-xdg                                         - Python 3 library to access freedesktop.org standards         
i   python3-xkit                                        - library for the manipulation of xorg.conf files (Python 3)   
i   python3-yaml                                        - YAML parser and emitter for Python3                          
i   python3-zope.interface                              - Interfaces for Python3                                       
i   python3.6                                           - Interactive high-level object-oriented language (version 3.6)
i A python3.6-dev                                       - Header files and a static library for Python (v3.6)          
i   python3.6-minimal                                   - Minimal subset of the Python language (version 3.6)          
paul@Picayune-beaver:~/Documents/code/pytorch/build$ aptitude search '~i pip'
i   libpipeline1                                        - pipeline manipulation library                                
i   python-pip                                          - Python package installer                                     
i A python-pip-whl                                      - Python package installer                                     
paul@Picayune-beaver:~/Documents/code/pytorch/build$ aptitude search '~i gcc'
i A gcc                                                 - GNU C compiler                                               
i   gcc-5                                               - GNU C compiler                                               
i A gcc-5-base                                          - GCC, the GNU Compiler Collection (base package)              
i   gcc-6                                               - GNU C compiler                                               
i A gcc-6-base                                          - GCC, the GNU Compiler Collection (base package)              
i A gcc-7                                               - GNU C compiler                                               
i   gcc-7-base                                          - GCC, the GNU Compiler Collection (base package)              
i   gcc-8-base                                          - GCC, the GNU Compiler Collection (base package)              
i A gcc-8-base:i386                                     - GCC, the GNU Compiler Collection (base package)              
i A libgcc-5-dev                                        - GCC support library (development files)                      
i   libgcc-6-dev                                        - GCC support library (development files)                      
i A libgcc-7-dev                                        - GCC support library (development files)                      
i   libgcc1                                             - GCC support library                                          
i A libgcc1:i386                                        - GCC support library                                          
paul@Picayune-beaver:~/Documents/code/pytorch/build$ aptitude search '~i nvidia'
i A libnvidia-cfg1-390                                  - NVIDIA binary OpenGL/GLX configuration library               
i A libnvidia-common-390                                - Shared files used by the NVIDIA libraries                    
i A libnvidia-compute-390                               - NVIDIA libcompute package                                    
i A libnvidia-compute-390:i386                          - NVIDIA libcompute package                                    
i A libnvidia-decode-390                                - NVIDIA Video Decoding runtime libraries                      
i A libnvidia-decode-390:i386                           - NVIDIA Video Decoding runtime libraries                      
i A libnvidia-encode-390                                - NVENC Video Encoding runtime library                         
i A libnvidia-encode-390:i386                           - NVENC Video Encoding runtime library                         
i A libnvidia-fbc1-390                                  - NVIDIA OpenGL-based Framebuffer Capture runtime library      
i A libnvidia-fbc1-390:i386                             - NVIDIA OpenGL-based Framebuffer Capture runtime library      
i A libnvidia-gl-390                                    - NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD    
i A libnvidia-gl-390:i386                               - NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD    
i A libnvidia-ifr1-390                                  - NVIDIA OpenGL-based Inband Frame Readback runtime library    
i A libnvidia-ifr1-390:i386                             - NVIDIA OpenGL-based Inband Frame Readback runtime library    
i A nvidia-compute-390                                  - NVIDIA computing metapackage                                 
i A nvidia-compute-no-dkms-390                          - NVIDIA computing metapackage - no DKMS                       
i A nvidia-compute-utils-390                            - NVIDIA compute utilities                                     
i A nvidia-dkms-390                                     - NVIDIA DKMS package                                          
i   nvidia-driver-390                                   - NVIDIA driver metapackage                                    
i A nvidia-kernel-source-390                            - NVIDIA kernel source package                                 
i A nvidia-settings                                     - Tool for configuring the NVIDIA graphics driver              
i A nvidia-utils-390                                    - NVIDIA driver support binaries                               
i A xserver-xorg-video-nvidia-390                       - NVIDIA binary Xorg driver 
pjh5 commented 6 years ago

It's best to run Caffe2 from anywhere other than the build folder, since the local directories can confuse Python's import statements.

@iGoog if your pip needs sudo, then I suggest using a Python installation that does not have this requirement. Can you read https://caffe2.ai/docs/faq.html#why-do-i-get-import-errors-in-python-when-i-try-to-use-caffe2 and https://caffe2.ai/docs/faq.html#how-can-i-find-a-file-library-or-package-on-my-computer and tell me where your libcaffe2.so is and what it's linked against?
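
A minimal sketch of the checks asked for above (the search root and the ldd path are assumptions; substitute whatever find reports):

# locate every copy of the caffe2 shared libraries on the system
find / -name 'libcaffe2*.so' 2>/dev/null
# inspect what a given copy is dynamically linked against (replace the path with one reported by find)
ldd /usr/local/lib/libcaffe2.so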