yifanlu0227 opened 5 months ago
Great question -- I never thought it would work with misaligned CUDA + Pytorch versions lol. I always just make them aligned. Would love to learn it as well if anyone knows why
Yes, I always use the same version: CUDA 11.8 and Torch 2.0.1+cu118. In my experiment, building gsplat==1.0.0 from the latest GitHub source succeeded. But I am also curious why it works when the versions are different.
Hi nerfstudio guys, thanks for your excellent library!
I have a minor question. It is usually required that the system CUDA toolkit version and the PyTorch runtime CUDA version be consistent in order to compile CUDA extensions.
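For reference, a minimal sketch for checking both versions (assuming `nvcc` is on your `PATH`):

```python
# Minimal sketch: compare the CUDA version PyTorch was built against
# with the system toolkit's nvcc (the compiler a JIT build will invoke).
import subprocess

import torch

# CUDA version baked into the installed PyTorch wheel, e.g. "11.8"
print("PyTorch runtime CUDA:", torch.version.cuda)

# CUDA version of the system toolkit, e.g. "release 12.1"
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```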
For example, compiling `diff-gaussian-rasterization` with CUDA 12.1 + PyTorch (cuda 11.8) raises a CUDA version mismatch error. This can also happen when building `gsplat` from source. But I find that CUDA 12.1 + PyTorch (cuda 11.8) works when using `pip install gsplat` and letting it build the CUDA code on the first run (JIT). The generated `~/.cache/torch_extensions/py310_cu118/gsplat_cuda/build.ninja` still uses the system nvcc. How can the JIT build succeed despite the mismatch between the system CUDA version and the PyTorch CUDA version? Here is the `build.ninja`:
```ninja
ninja_required_version = 1.3
cxx = c++
nvcc = /usr/local/cuda/bin/nvcc

cflags = -DTORCH_EXTENSION_NAME=gsplat_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/TH -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3
post_cflags =
cuda_cflags = -DTORCH_EXTENSION_NAME=gsplat_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/TH -isystem /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /yiflu/miniconda3/envs/gs_env_tmp/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++17
cuda_post_cflags =
cuda_dlink_post_cflags =
ldflags = -shared -L/yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart

rule compile
  command = $cxx -MMD -MF $out.d $cflags -c $in -o $out $post_cflags
  depfile = $out.d
  deps = gcc

rule cuda_compile
  depfile = $out.d
  deps = gcc
  command = $nvcc $cuda_cflags -c $in -o $out $cuda_post_cflags

rule link
  command = $cxx $in $ldflags -o $out

build rasterization.cuda.o: cuda_compile /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc/rasterization.cu
build projection.cuda.o: cuda_compile /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc/projection.cu
build sh.cuda.o: cuda_compile /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc/sh.cu
build ext.o: compile /yiflu/miniconda3/envs/gs_env_tmp/lib/python3.10/site-packages/gsplat/cuda/csrc/ext.cpp

build gsplat_cuda.so: link rasterization.cuda.o projection.cuda.o sh.cuda.o ext.o

default gsplat_cuda.so
```
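For context, this `build.ninja` is generated by PyTorch's JIT extension loader, `torch.utils.cpp_extension.load()`. A minimal sketch of that path (the extension name and source file names here are hypothetical placeholders, not gsplat's actual sources):

```python
# Minimal sketch of PyTorch's JIT extension build. load() writes a
# build.ninja like the one above under ~/.cache/torch_extensions/
# and compiles with the system nvcc (located via CUDA_HOME or PATH).
from torch.utils.cpp_extension import load

# "my_ext", "ext.cpp", and "kernels.cu" are hypothetical placeholders.
ext = load(
    name="my_ext",
    sources=["ext.cpp", "kernels.cu"],
    extra_cuda_cflags=["-O3", "--use_fast_math"],
    verbose=True,  # print the generated ninja commands during the build
)
```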