I tried to build k2 inside a Docker image provided by NVIDIA (nvcr.io/nvidia/pytorch:21.06-py3).
PyTorch version: 1.9.0a0+c3d40fd
PyTorch CUDA version: 11.3
The build command was python setup.py install
The errors below occurred when compiling deserialization.cu:
/workspace/k2/k2/torch/csrc/deserialization.cu(404): error: no suitable constructor exists to convert from "const char [1]" to "c10::optional<torch::jit::TypeResolver>"
/workspace/k2/k2/torch/csrc/deserialization.cu(404): error: no suitable constructor exists to convert from "const char [1]" to "c10::optional<torch::jit::ObjLoader>"
/workspace/k2/k2/torch/csrc/deserialization.cu(404): error: no suitable user-defined conversion from "lambda [](const c10::QualifiedName &)->c10::StrongTypePtr" to "c10::optional<c10::Device>" exists
/workspace/k2/k2/torch/csrc/deserialization.cu(404): error: a reference of type "caffe2::serialize::PyTorchStreamReader &" (not const-qualified) cannot be initialized with a value of type "lambda [](c10::StrongTypePtr, c10::IValue)->c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object>>"
/workspace/k2/k2/torch/csrc/deserialization.cu(405): error: too many arguments in function call