Closed: libing64 closed this issue 2 years ago.
Fixed by changing `"ninja" "-v"` to `"ninja" "--version"` in `torch/utils/cpp_extension.py`.
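The workaround can be sketched as a small text rewrite of the availability check. The exact quoting in `torch/utils/cpp_extension.py` varies between PyTorch releases (the list literal `['ninja', '-v']` below is an assumption), so inspect the installed file before editing it:

```python
# Sketch of the ninja workaround: rewrite the `ninja -v` invocation that
# torch.utils.cpp_extension uses to probe ninja so it calls
# `ninja --version` instead. The pattern ['ninja', '-v'] is an assumption;
# check the actual source of your PyTorch version first.

def patch_ninja_check(source: str) -> str:
    """Replace the ninja '-v' availability check with '--version'."""
    return source.replace("['ninja', '-v']", "['ninja', '--version']")

# Applying it to the installed file (path is illustrative, not real):
# path = "/path/to/site-packages/torch/utils/cpp_extension.py"
# with open(path) as f:
#     patched = patch_ninja_check(f.read())
# with open(path, "w") as f:
#     f.write(patched)
```

This only changes the version probe; it does not affect how ninja runs the actual build.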
I ran `pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0`, and it showed:
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/mit-han-lab/torchsparse.git
  Cloning https://github.com/mit-han-lab/torchsparse.git to /tmp/pip-req-build-onghpz7i
  Running command git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i
  fatal: unable to access 'https://github.com/mit-han-lab/torchsparse.git/': Could not resolve proxy: proxy.example.com
  error: subprocess-exited-with-error
× git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i did not run successfully.
│ exit code: 128
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i did not run successfully.
│ exit code: 128
╰─> See above for output.
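This first failure is not a torchsparse problem: git is configured with a proxy (`proxy.example.com`) that cannot be resolved. A sketch of clearing the stale proxy, assuming it was set via `git config` or environment variables:

```shell
# Remove a stale git proxy. Each unset may fail harmlessly if the
# corresponding setting was never present, hence the `|| true`.
git config --global --unset http.proxy || true
git config --global --unset https.proxy || true
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
```

After that, re-run the `pip install` command and the clone should reach GitHub directly (or configure a proxy that actually resolves).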
Is there an existing issue for this?
Have you followed all the steps in the FAQ?
Current Behavior
No response
Error Line
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
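Note that the full log further down contains a bare `Killed` line while nvcc was compiling `devoxelize_cuda.cu`, which usually means the kernel's OOM killer terminated a compiler process; the `ninja -v` exit status 1 is then only the messenger. A common mitigation (assuming the failure is memory pressure from parallel nvcc jobs) is to limit build parallelism, since `torch.utils.cpp_extension` honors the `MAX_JOBS` variable:

```shell
# Lower build parallelism so a single nvcc process is less likely to be
# OOM-killed during the extension build.
export MAX_JOBS=1
# Then retry the install, e.g.:
#   pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git
```
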
Environment
Full Error Log
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git Collecting git+https://github.com/mit-han-lab/torchsparse.git Cloning https://github.com/mit-han-lab/torchsparse.git to /tmp/pip-req-build-h1lex__9 Running command git clone --filter=blob:none -q https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-h1lex__9 Resolved https://github.com/mit-han-lab/torchsparse.git to commit 3cf80daa7d266de05b1c8b3512838312e35e6757 Preparing metadata (setup.py) ... done Building wheels for collected packages: torchsparse Building wheel for torchsparse (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/libing/anaconda3/envs/nvidia-semseg/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-h1lex__9/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-h1lex__9/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-t2z73o4p cwd: /tmp/pip-req-build-h1lex__9/ Complete output (273 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/tensor.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/operators.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/version.py -> build/lib.linux-x86_64-3.6/torchsparse creating build/lib.linux-x86_64-3.6/torchsparse/nn copying torchsparse/nn/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn creating build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/utils.py -> 
build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/quantize.py -> build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/collate.py -> build/lib.linux-x86_64-3.6/torchsparse/utils creating build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/count.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/devoxelize.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/crop.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/query.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/pooling.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/hash.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/activation.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/voxelize.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/downsample.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/conv.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional creating build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/crop.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/pooling.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/norm.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/bev.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/activation.py -> 
build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/conv.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules creating build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/apply.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/kernel.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils running build_ext building 'torchsparse.backend' extension creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6 creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution Emitting ninja build file /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) [1/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [2/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cuda.o 
-D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cuda.cu(28): warning: argument is incompatible with corresponding format string conversion [3/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 FAILED: /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include 
-I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign Killed [4/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c 
/tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [5/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/query_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [6/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv 
-O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [7/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/pybind_cuda.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/pybind_cuda.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/pybind_cuda.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H 
'-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [8/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [9/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include 
-I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hash/hash_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [10/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/count_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for 
C++ [11/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: 
/tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:138:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:138:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:153:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:153:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:238:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:238:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:248:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:248:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:266:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here
 DeprecatedTypeProperties & type() const {
 ^ ~~
/tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:266:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
 inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
 ^~~~~~~~~~~
[12/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/count_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
[13/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hash/hash_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
[14/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu: In lambda function:
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:53:44: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here
 DeprecatedTypeProperties & type() const {
 ^ ~~
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:53:99: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
 inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
 ^~~~~~~~~~~
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu: In lambda function:
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:72:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here
 DeprecatedTypeProperties & type() const {
 ^ ~~
/tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:72:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
   AT_DISPATCH_FLOATING_TYPES_AND_HALF(
   ^
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
 inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
 ^~~~~~~~~~~
[15/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/query_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1539, in _run_ninja_build
    env=env)
  File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "