NVlabs / tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

Cannot Install Tinycuda inside nerfstudio #208

Open theworldisonfire opened 1 year ago

theworldisonfire commented 1 year ago

I'm trying to use nerfstudio and I get to the second line of this step:

> pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
> pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

And I am getting this error log.

(nerfstudio) C:\Users\Dylan>pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to c:\users\dylan\appdata\local\temp\pip-req-build-ywxto29a
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ 'C:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a'
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit ea09e160960ee37a067edb4ad65a255705307961
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [36 lines of output]
      Building PyTorch extension for tiny-cuda-nn version 1.6
      Obtained compute capability 86 from PyTorch
      running bdist_wheel
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
      running build
      running build_py
      creating build
      creating build\lib.win-amd64-cpython-38
      creating build\lib.win-amd64-cpython-38\tinycudann
      copying tinycudann\modules.py -> build\lib.win-amd64-cpython-38\tinycudann
      copying tinycudann\__init__.py -> build\lib.win-amd64-cpython-38\tinycudann
      running egg_info
      creating tinycudann.egg-info
      writing tinycudann.egg-info\PKG-INFO
      writing dependency_links to tinycudann.egg-info\dependency_links.txt
      writing top-level names to tinycudann.egg-info\top_level.txt
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      reading manifest file 'tinycudann.egg-info\SOURCES.txt'
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      copying tinycudann\bindings.cpp -> build\lib.win-amd64-cpython-38\tinycudann
      running build_ext
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.8) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
        warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
      building 'tinycudann_bindings_86._C' extension
      creating build\dependencies
      creating build\dependencies\fmt
      creating build\dependencies\fmt\src
      creating build\src
      creating build\temp.win-amd64-cpython-38
      creating build\temp.win-amd64-cpython-38\Release
      creating build\temp.win-amd64-cpython-38\Release\tinycudann
      "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/tools/util/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/fmt/include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Dylan\.conda\envs\nerfstudio\include -IC:\Users\Dylan\.conda\envs\nerfstudio\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
      format.cc
      C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include\cstdlib(12): fatal error C1083: Cannot open include file: 'math.h': No such file or directory
      error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tinycudann
  Running setup.py clean for tinycudann
Failed to build tinycudann
Installing collected packages: tinycudann
  Running setup.py install for tinycudann ... error
  error: subprocess-exited-with-error

  × Running setup.py install for tinycudann did not run successfully.
  │ exit code: 1
  ╰─> [23 lines of output]
      Building PyTorch extension for tiny-cuda-nn version 1.6
      Obtained compute capability 86 from PyTorch
      running install
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        warnings.warn(
      running build
      running build_py
      running egg_info
      writing tinycudann.egg-info\PKG-INFO
      writing dependency_links to tinycudann.egg-info\dependency_links.txt
      writing top-level names to tinycudann.egg-info\top_level.txt
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
      reading manifest file 'tinycudann.egg-info\SOURCES.txt'
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      running build_ext
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.8) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
        warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
      building 'tinycudann_bindings_86._C' extension
      "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/tools/util/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/fmt/include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Dylan\.conda\envs\nerfstudio\include -IC:\Users\Dylan\.conda\envs\nerfstudio\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
      format.cc
      C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include\cstdlib(12): fatal error C1083: Cannot open include file: 'math.h': No such file or directory
      error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> tinycudann

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Tobe2d commented 1 year ago

Same here. I am on an RTX 4090.

theworldisonfire commented 1 year ago

I've managed to get closer than this error log shows; the only remaining issue was the last error, that link.exe (part of VS) failed to launch. The wheel issue is new after doing some reinstalls and uninstalls.

I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment.

I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there, I don't know.

Still no luck though, and since I'm getting frustrated I've set it aside for now until maybe someone can help.
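For reference, the tooling installs were roughly along these lines (a sketch; exact versions on my machine may have differed):

    # inside the nerfstudio conda environment
    pip install ninja cmake
    pip install --upgrade setuptools wheel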

mints7 commented 1 year ago

> I've managed to get closer than this error log shows; the only remaining issue was the last error, that link.exe (part of VS) failed to launch. The wheel issue is new after doing some reinstalls and uninstalls.
>
> I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment.
>
> I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there, I don't know.
>
> Still no luck though, and since I'm getting frustrated I've set it aside for now until maybe someone can help.

It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.

First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.

If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.

If they aren't empty... good luck.
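Since fmt and cutlass are git submodules of tiny-cuda-nn, an alternative to downloading them by hand is to let git fetch them and then build from the local checkout. A minimal sketch (assuming you install from a local clone rather than the pip URL):

    # clone with submodules (fmt and cutlass live under dependencies/)
    git clone --recursive https://github.com/NVlabs/tiny-cuda-nn
    cd tiny-cuda-nn
    # if the repo was already cloned without --recursive:
    git submodule update --init --recursive
    # build the PyTorch bindings from the local checkout
    cd bindings/torch
    pip install .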

theworldisonfire commented 1 year ago

> > I've managed to get closer than this error log shows; the only remaining issue was the last error, that link.exe (part of VS) failed to launch. The wheel issue is new after doing some reinstalls and uninstalls. I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment. I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there, I don't know. Still no luck though, and since I'm getting frustrated I've set it aside for now until maybe someone can help.
>
> It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.
>
> First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.
>
> If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.
>
> If they aren't empty... good luck.

This is the kind of response I was looking for. Thanks for letting me know what worked for you. I will try this as soon as I have a moment.

thomall commented 1 year ago

I'm struggling to resolve this issue as well, with the install failing at "fatal error C1083". Similar situation: running Windows 10, trying to install this package as part of the nerfstudio installation in conda.

> It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.
>
> First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.
>
> If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.

I just tried this and unfortunately I still get the same error: fatal error C1083: Cannot open include file: 'math.h': No such file or directory.

Cerf-Volant425 commented 1 year ago

> It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.
>
> First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.
>
> If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.
>
> If they aren't empty... good luck.

Thanks for your response, but it still didn't work for me; I got the error below:

Building PyTorch extension for tiny-cuda-nn version 1.7
Obtained compute capability 86 from PyTorch
running install
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
writing tinycudann.egg-info/PKG-INFO
writing dependency_links to tinycudann.egg-info/dependency_links.txt
writing top-level names to tinycudann.egg-info/top_level.txt
reading manifest file 'tinycudann.egg-info/SOURCES.txt'
writing manifest file 'tinycudann.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py:813: UserWarning: The detected CUDA version (11.1) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'tinycudann_bindings_86._C' extension
Emitting ninja build file /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o.d -pthread -B /home/evsjtu2/miniconda3/envs/nerfstudio/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o -std=c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o
c++ -MMD -MF /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o.d -pthread -B /home/evsjtu2/miniconda3/envs/nerfstudio/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o -std=c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
In file included from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9:0,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                 from /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:34:
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/ATen/Context.h:25:67: warning: type attributes ignored after type is already defined [-Wattributes]
 enum class TORCH_API Float32MatmulPrecision {HIGHEST, HIGH, MEDIUM};
                                                                   ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<tcnn::cpp::Context, at::Tensor> Module::fwd(at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:108:35: error: converting to ‘std::tuple<tcnn::cpp::Context, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple<_T1, _T2>::tuple(_U1&&, _U2&&) [with _U1 = tcnn::cpp::Context; _U2 = at::Tensor&; <template-parameter-2-3> = void; _T1 = tcnn::cpp::Context; _T2 = at::Tensor]’
   return { std::move(ctx), output };
                                   ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<at::Tensor, at::Tensor> Module::bwd(const tcnn::cpp::Context&, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:169:34: error: converting to ‘std::tuple<at::Tensor, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple<_T1, _T2>::tuple(_U1&&, _U2&&) [with _U1 = at::Tensor&; _U2 = at::Tensor&; <template-parameter-2-3> = void; _T1 = at::Tensor; _T2 = at::Tensor]’
   return { dL_dinput, dL_dparams };
                                  ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<at::Tensor, at::Tensor, at::Tensor> Module::bwd_bwd_input(const tcnn::cpp::Context&, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:240:47: error: converting to ‘std::tuple<at::Tensor, at::Tensor, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; <template-parameter-2-2> = void; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’
   return {dL_ddLdoutput, dL_dparams, dL_dinput};
                                               ^
[2/3] /usr/local/cuda/bin/nvcc  -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/src/cutlass_mlp.cu -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/src/cutlass_mlp.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -Xcompiler=-mf16c -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

^@^@[3/3] /usr/local/cuda/bin/nvcc  -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/src/fully_fused_mlp.cu -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/src/fully_fused_mlp.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -Xcompiler=-mf16c -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1808, in _run_ninja_build
    subprocess.run(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 127, in <module>
    setup(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
    return run_commands(dist)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
    dist.run_commands()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py", line 74, in run
    self.do_egg_install()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py", line 123, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 165, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
    self.run_command(cmdname)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 112, in build
    self.run_command('build_ext')
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
    self.build_extensions()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 765, in build_extensions
    build_ext.build_extensions(self)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 468, in build_extensions
    self._build_extensions_serial()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 494, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 549, in build_extension
    objects = self.compiler.compile(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 586, in unix_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1487, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1824, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
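The failing lines here are the braced return { ... } statements in bindings.cpp; very old g++/libstdc++ versions (roughly GCC 5.x and earlier) treat the std::tuple constructor as explicit in this situation and reject copy-list-initialization. A hedged sketch of one thing to try, assuming a newer g++ (here g++-9) is installed and that the extension build picks up the CC/CXX environment variables:

    # select a newer host compiler, then rebuild the bindings from a local clone
    export CC=gcc-9 CXX=g++-9
    cd tiny-cuda-nn/bindings/torch
    pip install .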
andzejsp commented 1 year ago

this is just garbage

avrum commented 1 year ago

Has anyone solved this issue? I'm having the same error:

E:\Program Files\CUDA\v11.7\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory

      cpp_api.cu

      ninja: build stopped: subcommand failed.
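crtdefs.h, like math.h earlier in this thread, comes from the Windows SDK's Universal CRT, so errors like this usually mean the MSVC/Windows SDK include paths aren't visible to the build. A sketch of one thing to try, assuming VS 2022 with the "Desktop development with C++" workload and a Windows 10/11 SDK installed (the vcvars64.bat path below is the default for the Community edition; adjust it for your edition and install location):

    :: load the MSVC + Windows SDK environment in a plain cmd prompt, then build
    call "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"
    conda activate nerfstudio
    pip install ninja
    pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch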
pkunliu commented 1 year ago

> > I've managed to get closer than this error log shows; the only remaining issue was the last error, that link.exe (part of VS) failed to launch. The wheel issue is new after doing some reinstalls and uninstalls. I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment. I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there, I don't know. Still no luck though, and since I'm getting frustrated I've set it aside for now until maybe someone can help.
>
> It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.
>
> First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.
>
> If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.
>
> If they aren't empty... good luck.

Wow!!! So good!

jike5 commented 1 year ago

> > I've managed to get closer than this error log shows; the only remaining issue was the last error, that link.exe (part of VS) failed to launch. The wheel issue is new after doing some reinstalls and uninstalls. I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment. I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there, I don't know. Still no luck though, and since I'm getting frustrated I've set it aside for now until maybe someone can help.
>
> It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it.
>
> First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty.
>
> If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders.
>
> If they aren't empty... good luck.

Works for me! Thanks!

dylanhu7 commented 1 year ago

I'm on Ubuntu. CUDA binaries (perhaps specifically nvcc) weren't in my PATH so I had to add them with PATH=/usr/local/cuda-11/bin:$PATH
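A sketch of making that persistent, assuming CUDA lives under /usr/local/cuda-11 (adjust the path to your install):

    # add nvcc and the CUDA libraries for the current shell and future shells
    export PATH=/usr/local/cuda-11/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-11/lib64:$LD_LIBRARY_PATH
    echo 'export PATH=/usr/local/cuda-11/bin:$PATH' >> ~/.bashrc
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
    # verify that the compiler the build will use is found
    nvcc --version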

LiXinghui-666 commented 1 year ago

> > It looks like you're using Windows; I'm on Ubuntu. I ran into the same problem and solved it with the method below, so you could try it. First, check whether "/tiny-cuda-nn-master/dependencies/fmt" and "/tiny-cuda-nn-master/dependencies/cutlass" are empty. If they are, open https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt @ b0c8263" and "cutlass @ 1eb6355" submodules, and place them in the corresponding folders. If they aren't empty... good luck.

> Thanks for your response, but it still didn't work for me; I got the error below:

> (same compiler error log as in the comment above)

I have the same problem as you and it has been bothering me. Did you manage to solve it?

andzejsp commented 1 year ago

No, the devs don't want to solve it.

smtabatabaie commented 1 year ago

I'm also trying to install it for nerfstudio on Windows 10 with Anaconda. I get the following errors when this command reaches setup.py: pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Here's part of my error:

ninja: build stopped: subcommand failed.
      Traceback (most recent call last):
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
          subprocess.run(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

      The above exception was the direct cause of the following exception:

      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\tabat\AppData\Local\Temp\pip-req-build-5hok1m9c\bindings/torch\setup.py", line 174, in <module>
          setup(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\__init__.py", line 87, in setup
          return distutils.core.setup(**attrs)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\wheel\bdist_wheel.py", line 325, in run
          self.run_command("build")
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run
          self.run_command(cmd_name)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
          _build_ext.run(self)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
          self.build_extensions()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 843, in build_extensions
          build_ext.build_extensions(self)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 468, in build_extensions
          self._build_extensions_serial()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 494, in _build_extensions_serial
          self.build_extension(ext)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
          _build_ext.build_extension(self, ext)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 549, in build_extension
          objects = self.compiler.compile(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 815, in win_wrap_ninja_compile
          _write_ninja_file_and_compile_objects(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1574, in _write_ninja_file_and_compile_objects
          _run_ninja_build(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build
          raise RuntimeError(message) from e
      RuntimeError: Error compiling objects for extension
      [end of output]

I would really appreciate it if someone could help; I've been stuck for more than a week.

wtj-zhong commented 1 year ago

I found the same problem when I used pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch. Here is the output:

pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-rb1bwfp5
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-rb1bwfp5
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit a77dc53ed770dd8ea6f78951d5febe175d0045e9
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Collecting ninja
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0f/58/854ce5aab0ff5c33d66e1341b0be42f0330797335011880f7fbd88449996/ninja-1.11.1-py2.py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (145 kB)
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [153 lines of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tinycudann
  Running setup.py clean for tinycudann
Failed to build tinycudann
ERROR: Could not build wheels for tinycudann, which is required to install pyproject.toml-based projects

andzejsp commented 1 year ago

Forget about it, the devs don't care; they have a secret unreleased package that they don't share. Waste of time.

smtabatabaie commented 1 year ago

I'll switch from Windows to try my chances with Ubuntu.

lindseysMT commented 1 year ago

What I ended up doing - if it helps anyone - is the following. Some personal spec information:

OS: Windows
Graphics Card: A6000
Command Prompt: Anaconda

In case it's not already installed, make sure you run:

conda install git

(so you can install git repos from command prompts)

I was getting some misleading errors about PATH variables, so I ran: conda install -c conda-forge cudatoolkit-dev

Added the system environment variable TCNN_CUDA_ARCHITECTURES, with the value based on this table: https://developer.nvidia.com/cuda-gpus

(ex: I have an A6000, so I have a value of 86 - remember to take out the decimal)

After this, I ran the command from the install instructions: pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

The build succeeds! To test, I ran the nerfacto example, and it runs! I hope this helps anyone with similar issues.
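
For reference, here is the whole sequence as commands in the Anaconda Prompt (a rough sketch of the steps above; 86 is the value for my A6000, so substitute your own compute capability from the NVIDIA table):

conda install git
conda install -c conda-forge cudatoolkit-dev
set TCNN_CUDA_ARCHITECTURES=86
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Note that set only affects the current prompt session; for a permanent setting, add it as a system environment variable instead.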

Tobe2d commented 1 year ago

@lindseysMT when you mentioned: Added system environment variable TCNN_CUDA_ARCHITECTURES

How do I do this? In my case the value is 89, but how do I do that step?

lindseysMT commented 1 year ago

@Tobe2d

  1. Go to your system properties
  2. Go to Environment Variables
  3. Under "System variables" click "New"
  4. Fill out like attached image
  5. Click OK

Should be good to go!

(Screenshot "Capture": the New System Variable dialog filled out with the variable name and value.)
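
If you prefer the command line to the dialog, a rough equivalent (assuming your value of 89) is:

setx TCNN_CUDA_ARCHITECTURES 89

setx stores the variable permanently, but it only takes effect in newly opened prompts, so close and reopen the Anaconda Prompt before retrying the install.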

Tobe2d commented 1 year ago

@lindseysMT Thank you so much!

smtabatabaie commented 1 year ago

(Quoting @lindseysMT's write-up above.)

Mine still fails with the same errors, but I could install and run nerfstudio without problems in Ubuntu

lindseysMT commented 1 year ago

@smtabatabaie I'm glad it works with Ubuntu - from my PATH errors, it seemed to be looking for a system variable that the cudatoolkit added, which cleared up on my end.

Askejm commented 1 year ago

I think I solved it after installing the VS 2022 build tools and making sure the VS 2019 build tools were uninstalled.

Some other things I also did which may have contributed:

  1. Fixed my PATH variables: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
  2. Added the TCNN_CUDA_ARCHITECTURES variable and installed cudatoolkit-dev (as mentioned by @Tobe2d)
  3. Installed Ninja through conda

nbourre commented 1 year ago

Ok, I've followed all these extra steps after a failed execution of pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

  1. Uninstalled all the build tools that are not VS 2022.
  2. As per @Tobe2d, I added the environment variable TCNN_CUDA_ARCHITECTURES
  3. I made sure that ...\v11.8\bin and ...\v11.8\libnvvp were in the PATH variable
  4. I've modified the CUDA_HOME environment variable to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
    • I had a previous CUDA installation
  5. Reran the pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch command

And it worked.
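
For anyone checking steps 3 and 4 on their own machine, a quick verification sketch in the same prompt (adjust the version folder to your CUDA install) looks like:

echo %CUDA_HOME%
where nvcc
nvcc --version

CUDA_HOME (or CUDA_PATH) should point at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8, and where nvcc should resolve to the bin folder of that same version.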

acecross commented 1 year ago

Sadly none of the above worked for me. For some reason, the nvcc and cl compilers could not pick up the include and library directories from my environment. My fix was to edit setup.py and pass all necessary includes and libs as compiler flags:

base_cflags = [
    "/std:c++14",
    r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\ucrt',
    r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\shared',
    r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\um',
    r'-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include',
]
base_nvcc_flags = [
    r"-I C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include",
    r"-I C:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\ucrt",
    r"-I C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include",
    "-std=c++14",
    "--extended-lambda",
    "--expt-relaxed-constexpr",
    # The following definitions must be undefined
    # since TCNN requires half-precision operation.
    "-U__CUDA_NO_HALF_OPERATORS__",
    "-U__CUDA_NO_HALF_CONVERSIONS__",
    "-U__CUDA_NO_HALF2_OPERATORS__",
]
link_flags = [
    r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\lib\x64',
    r'C:\Program Files (x86)\Windows Kits\10\Lib\10.0.20348.0\um\x64',
    r'C:\Program Files (x86)\Windows Kits\10\Lib\10.0.20348.0\ucrt\x64',
]
def make_extension(compute_capability):
    nvcc_flags = base_nvcc_flags + [f"-gencode=arch=compute_{compute_capability},code={code}_{compute_capability}" for code in ["compute", "sm"]]
    definitions = base_definitions + [f"-DTCNN_MIN_GPU_ARCH={compute_capability}"]

    if include_networks and compute_capability > 70:
        source_files = base_source_files + ["../../src/fully_fused_mlp.cu"]
    else:
        source_files = base_source_files

    nvcc_flags = nvcc_flags + definitions
    cflags = base_cflags + definitions

    ext = CUDAExtension(
        name=f"tinycudann_bindings._{compute_capability}_C",
        sources=source_files,
        include_dirs=[
            "%s/include" % root_dir,
            "%s/dependencies" % root_dir,
            "%s/dependencies/cutlass/include" % root_dir,
            "%s/dependencies/cutlass/tools/util/include" % root_dir,
            "%s/dependencies/fmt/include" % root_dir,
        ],
        extra_compile_args={"cxx": cflags, "nvcc": nvcc_flags},
        libraries=["cuda", ],
        library_dirs=link_flags,
    )
    return ext

This might not be a beautiful fix, and there is probably an easier way to collect these paths, but it worked for me. Note that link_flags was created by me, while cflags and nvcc_flags were only extended. Remember to adjust the paths to your corresponding Visual Studio version.
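
If you want to avoid hard-coding these directories, one possible shortcut (not what I did, just a sketch; adjust the edition and version to your install) is to let vcvarsall.bat populate the INCLUDE and LIB environment variables and copy the paths from there:

"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
echo %INCLUDE%
echo %LIB%

Each variable is a semicolon-separated list of the MSVC and Windows SDK directories, which is exactly what the -I flags and library_dirs entries above need.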

Turmac commented 1 year ago

I solved this by:

  1. Completely uninstall VS 2022 and CUDA.
  2. Install VS 2019.
  3. Install CUDA. Then install tinycuda.

andzejsp commented 1 year ago

This is ridiculous: you pretty much have to nuke your system just to use this. Couldn't they have built this in a virtual env, maybe conda, and built the binaries in the env?

It saddens me that even though this much time has passed, people still struggle with this.

GeorgeProfenzaD3 commented 1 year ago

I've also stumbled on the same issue: the compiler can't seem to find <cassert> and <crtdefs.h>.

I did follow the advice from above:

 where cl
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64\cl.exe

These are the errors I experience using pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

...
  C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\c10/macros/Macros.h(3): fatal error C1083: Cannot open include file: 'cassert': No such file or directory
  C:\Users\george.profenza\AppData\Local\Temp\pip-req-build-p9s3io0c/dependencies/fmt/include\fmt/os.h(11): fatal error C1083: Cannot open include file: 'cerrno': No such file or directory
  C:\Users\george.profenza\AppData\Local\Temp\pip-req-build-p9s3io0c/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
      subprocess.run(
    File "C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\subprocess.py", line 516, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

This is a conda environment with CUDA 11.8 and PyTorch 2.0.1+cu118 installed.

I've also tried cloning the repo recursively to ensure I'm using the right commits (as per @mints7 's advice):

These match the latest commit from the main branch, so unsurprisingly I'm seeing the same errors:

...
C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\c10/macros/Macros.h(3): fatal error C1083: Cannot open include file: 'cassert': No such file or directory
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/os.h(11): fatal error C1083: Cannot open include file: 'cerrno': No such file or directory
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory
encoding.cu

I did double check I have the "ingredients":

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
C:\Windows\System32
C:\Windows
C:\Windows\System32\wbem
C:\Windows\System32\OpenSSH
C:\Program Files\NVIDIA Corporation\Nsight Compute 2022.3.0\
C:\Program Files\dotnet\
C:\Users\george.profenza\AppData\Local\Programs\Microsoft VS Code\bin
C:\Users\george.profenza\.pyenv\pyenv-win\bin
C:\Program Files\Git\bin
C:\Users\george.profenza\AppData\Local\GitHubDesktop\bin
C:\Users\george.profenza\AppData\Local\Microsoft\WindowsApps
C:\Program Files\ImageMagick-7.1.0-Q16-HDRI
C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\condabin
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64
C:\Program Files\CMake\bin
C:\COLMAP-3.8-windows-cuda
C:\Users\george.profenza\.dotnet\tools

(Screenshots: Everything search results for "cassert" and "crtdef".)

@Tom94 I can imagine you and your team must be super busy, and I appreciate you sharing all this wonderful code with ready-to-go samples. However, I could use a few hints/tips/RTFM links/etc. to get over this hump. Any hints on what I might be missing? (Tweaking the scripts to add explicit paths to the headers feels hacky, so I thought I'd double check.)

Thank you so much, George

GeorgeProfenzaD3 commented 1 year ago

@Askejm Can you please elaborate on your opinion above? (Maybe my outputs are too verbose? 🤷 😅) Perhaps I'm missing something? I've tried your suggestions (the CUDA 11.8 libnvvp and bin folders are added to PATH, and TCNN_CUDA_ARCHITECTURES is set (to 86 in my case) in the system environment variables).

I found this comment: disabling ninja gets me to the <algorithm> errors others experienced:

python setup.py build
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
Building PyTorch extension for tiny-cuda-nn version 1.7
Obtained compute capabilities [86] from environment variable TCNN_CUDA_ARCHITECTURES
Detected CUDA version 11.8
Targeting C++ standard 17
running build
running build_py
running egg_info
writing tinycudann.egg-info\PKG-INFO
writing dependency_links to tinycudann.egg-info\dependency_links.txt
writing top-level names to tinycudann.egg-info\top_level.txt
reading manifest file 'tinycudann.egg-info\SOURCES.txt'
writing manifest file 'tinycudann.egg-info\SOURCES.txt'
running build_ext
building 'tinycudann_bindings._86_C' extension
"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\tiny-cuda-nn/include -IC:\tiny-cuda-nn/dependencies -IC:\tiny-cuda-nn/dependencies/cutlass/include -IC:\tiny-cuda-nn/dependencies/cutlass/tools/util/include -IC:\tiny-cuda-nn/dependencies/fmt/include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\Include /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++17 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
format.cc
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  from pkg_resources import parse_version
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.37.32822\\bin\\Hostx64\\x64\\cl.exe' failed with exit code 2
(nerfstudio)

(It's unclear to me which algorithm this refers to (std / boost / absl / etc.).)

Askejm commented 1 year ago

@Askejm Can you please elaborate on your opinion above ? (Maybe my outputs are too verbose ? 🤷 😅 )

Yeah it's generally mildly annoying when someone puts a 50 page error and you have to scroll all the way through it. Typically only the end will be what's useful and sometimes the start, so keep that in mind for the future 👍

GeorgeProfenzaD3 commented 1 year ago

@Askejm Thanks for the feedback: much easier to understand what's going on. (I wrongly assumed that the more details I provide, the easier it would be to figure out what's going on.) I've updated / trimmed my comment above considerably.

I started manually adding MSVC's include path which got me slightly further, but now I'm getting errors about missing Windows SDK headers: C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\cstdlib(12): fatal error C1083: Cannot open include file: 'math.h': No such file or directory

Update: I could finally build by explicitly adding the MSVC and Windows SDK include folders in setup.py:

"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include"
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt"
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um"
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"

On my setup that's MSVC 14.37.32822 and Windows SDK 10.0.22621.0 but that may differ on your system.

This is pretty hacky and I don't recommend it. There should be a way to configure the build tools so they know about MSVC/the Windows SDK/etc. Where can I find more info about this?

Snowad14 commented 1 year ago

This is pretty hacky and I don't recommend it. There should be a way to configure the build tools so they know about MSVC/the Windows SDK/etc. Where can I find more info about this?

Yeah, just use vcvarsall x86_amd64; it fixes all of that.

GeorgeProfenzaD3 commented 1 year ago

@Snowad14 Thanks for the suggestion. I wish it just worked as you described it:

"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\Build\vcvarsx86_amd64.bat"
**********************************************************************
** Visual Studio 2022 Developer Command Prompt v17.7.3
** Copyright (c) 2022 Microsoft Corporation
**********************************************************************
[vcvarsall.bat] Environment initialized for: 'x86_x64'
"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\blip\tiny-cuda-nn/include -IC:\blip\tiny-cuda-nn/dependencies -IC:\blip\tiny-cuda-nn/dependencies/cutlass/include -IC:\blip\tiny-cuda-nn/dependencies/cutlass/tools/util/include -IC:\blip\tiny-cuda-nn/dependencies/fmt/include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\lib\site-packages\torch\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\lib\site-packages\torch\include\TH -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\neuralangelo\Include /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++17 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
format.cc
C:\blip\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  from pkg_resources import parse_version
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.37.32822\\bin\\Hostx64\\x64\\cl.exe' failed with exit code 2

Also tried:

 "C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
**********************************************************************
** Visual Studio 2022 Developer Command Prompt v17.7.3
** Copyright (c) 2022 Microsoft Corporation
**********************************************************************
[vcvarsall.bat] Environment initialized for: 'x86_x64'

same error:

format.cc
C:\blip\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory

I'm starting to suspect my setup is a bit messy, using multiple versions of Python and MSVC (14.29.16.11 to 14.34.17.4).

andzejsp commented 1 year ago

9 months in and still not fixed... just garbage

shaulbarlev commented 1 year ago

On Ubuntu, sudo apt install build-essential worked for me.

kobechenyang commented 1 year ago

(Quoting @GeorgeProfenzaD3's update above about adding the MSVC and Windows SDK include folders in setup.py.)

I am having the same issue. Time to switch to Linux.

GeorgeProfenzaD3 commented 1 year ago

@kobechenyang FWIW I've attempted to compile a pip wheel and uploaded it here: https://sensori.al/github/tinycudann-1.7-cp38-cp38-win_amd64.whl

This was compiled under a conda environment using Python 3.8.1 with CUDA 11.8 preinstalled (with the hacky hardcoded include paths mentioned above)

This is the version of pytorch installed via conda:

 - pytorch=2.0.1=py3.8_cuda11.8_cudnn8_0
  - pytorch-cuda=11.8=h24eeafa_5
  - pytorch-mutex=1.0=cuda

Unfortunately I won't be able to support this, so install at your own risk. Unless your environment matches (same versions of Python, CUDA, and torch), I suspect it may not work. Simply uninstall it if it doesn't work for you.

(Note: this was compiled for compute capability 86, not 89, so you may not get full performance on a newer GPU.)
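
A quick way to check whether your environment matches before trying the wheel (just a sanity-check sketch):

python --version
python -c "import torch; print(torch.__version__, torch.version.cuda)"
nvcc --version

If those don't report Python 3.8, torch 2.0.1 built against CUDA 11.8, and CUDA 11.8 respectively, the wheel will most likely fail to import.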

luishresende commented 12 months ago

I installed NVIDIA's CUDA toolkit directly from the NVIDIA website and changed the "CUDA_PATH" environment variable to point at the CUDA toolkit installation path. That worked for me. Hope this helps.

Example: CUDA_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7"
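
To make the same change from a prompt instead of the system dialog, something like this should work (adjust the version folder to your install, then open a new prompt so it takes effect):

setx CUDA_PATH "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7"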

ShaYito commented 11 months ago

Hi guys, I may have found a solution. I removed C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\bin\Hostx64\x64 (the directory of cl.exe) from my system PATH, and the install succeeded. My setup: CUDA 11.8, Windows 11, RTX 4080.

(torch-ngp) C:\_piper\torch-ngp>pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to c:\users\shayi\appdata\local\temp\pip-req-build-seu0rbf4
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ 'C:\Users\shayi\AppData\Local\Temp\pip-req-build-seu0rbf4'
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit 212104156403bd87616c1a4f73a1c5f2c2e172a9
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py) ... done
  Created wheel for tinycudann: filename=tinycudann-1.7-cp310-cp310-win_amd64.whl size=24755089 sha256=b5b4906671b86b7edd12b6990df9f77534d94e80ea8995f3145ad4ded3188240
  Stored in directory: C:\Users\shayi\AppData\Local\Temp\pip-ephem-wheel-cache-mbdazcoy\wheels\32\d8\5e\dc94eca0794af9e09a6d97f19cf15dfe9bbbc4d56ae4db4aa2
Successfully built tinycudann
Installing collected packages: tinycudann
Successfully installed tinycudann-1.7

Before that, I was getting this error

        File "C:\miniconda3\envs\torch-ngp\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build
          raise RuntimeError(message) from e
      RuntimeError: Error compiling objects for extension
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tinycudann
  Running setup.py clean for tinycudann
Failed to build tinycudann

jackdaus commented 10 months ago

@GeorgeProfenzaD3 this is a bit of a late response, but I'm leaving this info here for others encountering the same issue.

When installing this on Windows, if you've gotten to a point where you get the error, "Cannot open include file: 'algorithm'", then the solution posted in issue 280 may be able to help. In short, you'll need to run the script vcvarsall.bat from INSIDE your activated conda environment. (I had previously overlooked that detail myself.)

I will repost the steps here for convenience for other folks.

  1. Activate your conda environment.
    conda activate nerfstudio
  2. From INSIDE the activated conda environment, run the vcvarsall.bat script to set up the environment variables. For example, I am running the .bat file from Visual Studio 2022. I am compiling for the x64 platform, so I use the x64 argument.
    C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\Build>vcvarsall.bat x64
  3. Now, the install should work!
    pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
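
Putting those three steps together in one Command Prompt session looks roughly like this (a sketch; adjust the Visual Studio edition and paths to your install):

conda activate nerfstudio
"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\Build\vcvarsall.bat" x64
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

The key point is that vcvarsall.bat and the pip install run in the same shell, so the INCLUDE/LIB/PATH variables it sets are still in effect when the extension compiles.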


GeorgeProfenzaD3 commented 10 months ago

@jackdaus That is an important gotcha! Thanks for highlighting that for everybody else as well, Jack. (I think it also didn't help that I was using a MINGW64 shell :) )

Yuqi-Miao commented 10 months ago

I've managed to get closer than this error log shows. The only remaining issue was the last error, that link.exe failed to launch, which is part of VS. The .wheel issue is new after doing some reinstalls and uninstalls. I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment. I've tried installing ninja, cmake, updating setuptools, manually updating wheel, as well as doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've changed the environment variables by following help articles. Maybe I messed something up there, I don't know. Still no luck though, and getting frustrated, I've set it aside for now until maybe someone can help.

It looks like you're using Windows. I use Ubuntu and also encountered this problem. I solved it with the following method; you can try it.

First, check whether /tiny-cuda-nn-master/dependencies/fmt and /tiny-cuda-nn-master/dependencies/cutlass are empty.

If they are, go to https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and download the full contents of the pinned submodules fmt @ b0c8263 and cutlass @ 1eb6355.

Then place them in the corresponding folders.

If they are not empty... good luck.

Thanks! WOW, it works!!!

MartinNose commented 10 months ago


Thanks @ShaYito for posting this solution. It works for me!

And my setup is Windows 11 with torch 2.0.1 and CUDA 11.8.

gustaavv commented 10 months ago

I ran into the same problem when installing tinycudann alone (not with nerfstudio); my environment is Windows 10 with CUDA 11.8.

The solution is running pip install in "x64 Native Tools Command Prompt for VS 2019" of Visual Studio (you can find it by typing the name in windows search)

The original link of this solution: https://zhuanlan.zhihu.com/p/632963291

Thomas-Lei commented 9 months ago

I run "python setup.py install" in the x64 native tools prompt, it works. Thanks.

Krysidian commented 9 months ago

(Quoting the fmt/cutlass dependencies fix above.)

I really don't understand where I'm supposed to place this. The command to install tiny-cuda-nn creates a temporary folder in temp that always has a different name and gets deleted as soon as the command finishes so there is nowhere to place these files. What am I supposed to do with this?

ishipachev commented 5 months ago

Another note for anyone digging through this topic. Win10 here.

In my case, a compile error mentioning the corecrt.h file was caused by a missed checkmark when I installed the Visual Studio components as described in the installation instructions. Here is a thread on SO: https://stackoverflow.com/questions/38290169/cannot-find-corecrt-h-universalcrt-includepath-is-wrong Essentially it narrows down to installing the "Windows Universal CRT SDK" component in the Visual Studio Installer by clicking "Modify" on the currently installed batch of VS packages.

After that I had another error about a missing windows.h, but that one was caused by me intentionally unchecking this component in the VS Installer to reduce the space it occupies on drive C. Don't do that; you will actually need the "Windows 10 SDK" and the windows.h it brings.

Other tweaks I did earlier to resolve issues I encountered while building tinycudann for nerfstudio: