mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License
1.2k stars 138 forks

[Installation] <Build Failed> #144

Closed · libing64 closed this issue 2 years ago

libing64 commented 2 years ago

Is there an existing issue for this?

Have you followed all the steps in the FAQ?

Current Behavior

No response

Error Line

subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

Environment

- GCC: gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)
- NVCC: Cuda compilation tools, release 10.2, V10.2.89
- PyTorch: 1.7.1
- PyTorch CUDA: 10.2
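
For reference, the values above can be reproduced with a short check like the following (illustrative sketch, not part of the original report); a mismatch between torch.version.cuda and the system nvcc release is a common cause of extension build failures.

```python
# Sketch: reproduce the environment values listed above and confirm that
# PyTorch's CUDA build matches the system nvcc release.
import subprocess
import torch

print("PyTorch:", torch.__version__)        # reported as 1.7.1
print("PyTorch CUDA:", torch.version.cuda)  # reported as 10.2
subprocess.run(["gcc", "--version"], check=True)   # reported as gcc 7.5.0
subprocess.run(["nvcc", "--version"], check=True)  # reported as release 10.2, V10.2.89
```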

Full Error Log

Error Log pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git Collecting git+https://github.com/mit-han-lab/torchsparse.git Cloning https://github.com/mit-han-lab/torchsparse.git to /tmp/pip-req-build-h1lex__9 Running command git clone --filter=blob:none -q https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-h1lex__9 Resolved https://github.com/mit-han-lab/torchsparse.git to commit 3cf80daa7d266de05b1c8b3512838312e35e6757 Preparing metadata (setup.py) ... done Building wheels for collected packages: torchsparse Building wheel for torchsparse (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/libing/anaconda3/envs/nvidia-semseg/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-h1lex__9/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-h1lex__9/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-t2z73o4p cwd: /tmp/pip-req-build-h1lex__9/ Complete output (273 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/tensor.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/operators.py -> build/lib.linux-x86_64-3.6/torchsparse copying torchsparse/version.py -> build/lib.linux-x86_64-3.6/torchsparse creating build/lib.linux-x86_64-3.6/torchsparse/nn copying torchsparse/nn/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn creating build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/utils.py -> build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/quantize.py -> build/lib.linux-x86_64-3.6/torchsparse/utils copying torchsparse/utils/collate.py -> build/lib.linux-x86_64-3.6/torchsparse/utils creating build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/count.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/devoxelize.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/crop.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/query.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/pooling.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/hash.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/activation.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/voxelize.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/downsample.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional copying torchsparse/nn/functional/conv.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/functional creating build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying 
torchsparse/nn/modules/crop.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/pooling.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/norm.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/bev.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/activation.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules copying torchsparse/nn/modules/conv.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/modules creating build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/__init__.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/apply.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils copying torchsparse/nn/utils/kernel.py -> build/lib.linux-x86_64-3.6/torchsparse/nn/utils running build_ext building 'torchsparse.backend' extension creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6 creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize creating /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution Emitting ninja build file /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) [1/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [2/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hashmap/hashmap_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /tmp/pip-req-build-h1lex__9/torchsparse/backend/hashmap/hashmap_cuda.cu(28): warning: argument is incompatible with corresponding format string conversion [3/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' 
-DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 FAILED: /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign Killed [4/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/devoxelize/devoxelize_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/devoxelize/devoxelize_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [5/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH 
-I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/query_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [6/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [7/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/pybind_cuda.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/pybind_cuda.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/pybind_cuda.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [8/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include 
-I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [9/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hash/hash_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [10/15] c++ -MMD -MF /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cpu.o.d -pthread -B /home/libing/anaconda3/envs/nvidia-semseg/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/count_cpu.cpp -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [11/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include 
-I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/convolution/convolution_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:138:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:138:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:153:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. 
If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:153:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:238:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:238:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:248:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:248:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:266:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/convolution/convolution_cuda.cu:266:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ [12/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/count_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/count_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign 
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign [13/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/hash/hash_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/hash/hash_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign [14/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/voxelize/voxelize_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 
/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:53:44: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:53:99: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu: In lambda function: /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:72:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here DeprecatedTypeProperties & type() const { ^ ~~ /tmp/pip-req-build-h1lex__9/torchsparse/backend/voxelize/voxelize_cuda.cu:72:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations] AT_DISPATCH_FLOATING_TYPES_AND_HALF( ^ /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) { ^~~~~~~~~~~ [15/15] /usr/local/cuda-10.2/bin/nvcc -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/TH -I/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/libing/anaconda3/envs/nvidia-semseg/include/python3.6m -c -c /tmp/pip-req-build-h1lex__9/torchsparse/backend/others/query_cuda.cu -o /tmp/pip-req-build-h1lex__9/build/temp.linux-x86_64-3.6/torchsparse/backend/others/query_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign /home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1539, in _run_ninja_build env=env) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "", line 1, in File "/tmp/pip-req-build-h1lex__9/setup.py", line 40, in zip_safe=False, File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/wheel/bdist_wheel.py", line 299, in run self.run_command('build') File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions build_ext.build_extensions(self) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension depends=ext.depends) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 500, in unix_wrap_ninja_compile with_cuda=with_cuda) File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1255, in _write_ninja_file_and_compile_objects error_prefix='Error compiling objects for extension') File "/home/libing/anaconda3/envs/nvidia-semseg/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension ---------------------------------------- ERROR: Failed building wheel for torchsparse Running setup.py clean for torchsparse Failed to build torchsparse Installing collected packages: torchsparse Running setup.py install for torchsparse ... 
done DEPRECATION: torchsparse was installed using the legacy 'setup.py install' method, because a wheel could not be built for it. A possible replacement is to fix the wheel build issue reported above. Discussion can be found at https://github.com/pypa/pip/issues/8368 Successfully installed torchsparse-1.4.0 (nvidia-semseg) libing@drl-dz0599:~/source/semantic_segment/torchsparse$
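
Two details in the log above stand out: the nvcc step [3/15] compiling devoxelize_cuda.cu ends with "Killed", which typically indicates the compiler process was terminated by the kernel's out-of-memory killer, and the log itself notes that the number of parallel ninja workers can be overridden with the MAX_JOBS environment variable. A hedged sketch of retrying the build with fewer workers (the value 2 is an arbitrary, conservative choice, not taken from this thread):

```python
# Sketch: rerun the failing install with fewer parallel ninja workers so each
# nvcc invocation has more memory available. MAX_JOBS is honored by
# torch.utils.cpp_extension, as mentioned in the build log above.
import os
import subprocess
import sys

env = dict(os.environ, MAX_JOBS="2")  # arbitrary, conservative worker count
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--upgrade",
     "git+https://github.com/mit-han-lab/torchsparse.git"],
    env=env,
    check=True,
)
```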
libing64 commented 2 years ago

Fixed by changing "ninja" "-v" to "ninja" "--version" in torch/utils/cpp_extension.py; see the sketch below the link.

https://www.i4k.xyz/article/weixin_43731803/116787152
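
For clarity, the workaround described above amounts to editing the ninja invocation inside _run_ninja_build in torch/utils/cpp_extension.py (the exact line number varies between PyTorch versions; this is a sketch of the local edit, not an upstream fix):

```python
# torch/utils/cpp_extension.py, inside _run_ninja_build (location varies by version).
# Original line, matching the command shown in the traceback above:
command = ['ninja', '-v']
# Workaround described in this comment:
command = ['ninja', '--version']
```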

vico1999-ros commented 1 year ago

I used this command: pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0 and it shows:

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/mit-han-lab/torchsparse.git
Cloning https://github.com/mit-han-lab/torchsparse.git to /tmp/pip-req-build-onghpz7i
Running command git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i
fatal: unable to access 'https://github.com/mit-han-lab/torchsparse.git/': Could not resolve proxy: proxy.example.com
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/mit-han-lab/torchsparse.git /tmp/pip-req-build-onghpz7i did not run successfully.
│ exit code: 128
╰─> See above for output.
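
Not part of the original replies, but the "Could not resolve proxy: proxy.example.com" message above points at a proxy configured for git (or in the shell environment) that cannot be resolved. A diagnostic sketch:

```python
# Sketch: check whether git or the environment carries a stale proxy setting.
import os
import subprocess

for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    print(var, "=", os.environ.get(var))

# Print any proxy configured for git; clearing it, e.g. with
#   git config --global --unset http.proxy
#   git config --global --unset https.proxy
# typically lets the clone proceed.
subprocess.run(["git", "config", "--global", "--get", "http.proxy"])
subprocess.run(["git", "config", "--global", "--get", "https.proxy"])
```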