sandun21 opened this issue 7 months ago
I have never been able to build ME with CUDA 12+. I have tried to do this in the official NVIDIA Docker image. It simply isn't supported AFAIK. You can build with CUDA 11.8 and PyTorch 2.0.1 or 2.1 AFAIK.
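The known-good combination mentioned above (CUDA 11.8 + PyTorch 2.0.1) can be pinned explicitly when setting up the environment. A minimal sketch of building the install commands — the helper name is made up for illustration; the `cu118` wheel index is PyTorch's official one, and `--no-deps` mirrors the command used later in this thread:

```python
# Sketch: build pip commands for a pinned CUDA 11.8 + PyTorch 2.0.1 stack.
# `pinned_install_commands` is an illustrative helper, not part of any library.

def pinned_install_commands(torch_version: str = "2.0.1") -> list[str]:
    """Return shell commands for a CUDA 11.8 build environment."""
    wheel_index = "https://download.pytorch.org/whl/cu118"
    return [
        # Install a torch wheel built against CUDA 11.8.
        f"pip install torch=={torch_version} --index-url {wheel_index}",
        # Then build MinkowskiEngine from source against that torch install.
        "pip install -U git+https://github.com/NVIDIA/MinkowskiEngine --no-deps",
    ]

for cmd in pinned_install_commands():
    print(cmd)
```

The point is that the torch wheel's CUDA build (the `+cu118` tag) must match the toolkit that nvcc uses when compiling MinkowskiEngine's extensions.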
I built with CUDA 11.8, but it hit a problem:

```
/root/MinkowskiEngine/src/spmm.cu: In instantiation of 'minkowski::coo_spmm(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, const at::Tensor&, int64_t, bool)::<lambda()>::<lambda()> [with th_int_type = int]':
/root/MinkowskiEngine/src/spmm.cu:203:0: required from 'struct minkowski::coo_spmm(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, const at::Tensor&, int64_t, bool)::<lambda()> [with th_int_type = int]::<lambda()>'
/root/MinkowskiEngine/src/spmm.cu:203:0: required from 'minkowski::coo_spmm(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, const at::Tensor&, int64_t, bool)::<lambda()> [with th_int_type = int]'
/root/MinkowskiEngine/src/spmm.cu:203:0: required from 'struct minkowski::coo_spmm(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, const at::Tensor&, int64_t, bool) [with th_int_type = int; int64_t = long int]::<lambda()>'
/root/MinkowskiEngine/src/spmm.cu:203:0: required from 'at::Tensor minkowski::coo_spmm(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, const at::Tensor&, int64_t, bool) [with th_int_type = int; int64_t = long int]'
/root/MinkowskiEngine/src/spmm.cu:335:232: required from here
/root/MinkowskiEngine/src/spmm.cu:203:1428: internal compiler error: in maybe_undo_parenthesized_ref, at cp/semantics.c:1740
 AT_DISPATCH_FLOATING_TYPES(vals.scalar_type(), "coo_spmm", [&] {
 ^
Please submit a full bug report, with preprocessed source if appropriate.
See https://gcc.gnu.org/bugs/ for instructions.
error: command '/usr/local/cuda/bin/nvcc' failed with exit code 1
(pts) root@autodl-container-b6e5119aae-8e42a0aa:~/MinkowskiEngine#
```
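An `internal compiler error ... at cp/semantics.c:1740` is a crash in the host g++ that nvcc invokes, not an error in the CUDA code itself; the usual workaround is to point nvcc at a different host compiler (e.g. `CXX=g++-9 python setup.py install`). A small sketch for checking the host compiler first — the function names and the "GCC <= 11" cutoff (CUDA 11.8's documented host-compiler ceiling) are assumptions for illustration:

```python
import re

# Sketch: parse the first line of `gcc --version` output and flag host
# compilers outside the range CUDA 11.8 supports (assumed: GCC <= 11).
# These helpers are illustrative, not part of MinkowskiEngine or CUDA.

def gcc_major(version_output: str) -> int:
    """Extract the major version from the first line of `gcc --version`."""
    first_line = version_output.splitlines()[0]
    match = re.search(r"(\d+)\.\d+\.\d+", first_line)
    if match is None:
        raise ValueError(f"unrecognized gcc version line: {first_line!r}")
    return int(match.group(1))

def supported_by_cuda_11_8(major: int) -> bool:
    """Assumed cutoff: CUDA 11.8 accepts host GCC up to major version 11."""
    return major <= 11

sample = "gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0"
print(gcc_major(sample), supported_by_cuda_11_8(gcc_major(sample)))
```

In practice you would feed in the real output of `gcc --version`; if the default compiler is outside the supported range, install an older g++ and export `CC`/`CXX` before rebuilding.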
When I tried to install MinkowskiEngine on my Linux machine it gave the error below; the same happens for the ctcdecode library. I have torch==2.2.0 with CUDA 12.1 installed. When I tried to install with older PyTorch and CUDA versions it printed a long error message about a mismatch between the system and PyTorch CUDA versions. Thank you in advance.
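The "mismatch between system and PyTorch CUDA versions" check compares the CUDA toolkit on the machine (from `nvcc --version`) with the CUDA tag the torch wheel was built against (`torch.version.cuda`, also visible as the `+cuXYZ` suffix in `torch.__version__`). A quick sketch of that comparison — the helper name is hypothetical:

```python
import re

# Sketch: compare the system CUDA version with the build tag in a torch
# version string. `cuda_tags_match` is a hypothetical helper; at runtime
# you would pass it the system CUDA version and torch.__version__.

def cuda_tags_match(system_cuda: str, torch_version: str) -> bool:
    """True if e.g. system_cuda='12.1' matches a '+cu121' wheel tag."""
    match = re.search(r"\+cu(\d+)", torch_version)
    if match is None:
        return False  # CPU-only wheel or unrecognized tag
    return match.group(1) == system_cuda.replace(".", "")

print(cuda_tags_match("12.1", "2.2.0+cu121"))  # → True
print(cuda_tags_match("11.8", "2.2.0+cu121"))  # → False
```

If this returns False for your setup, either install the CUDA toolkit that matches the torch wheel, or install a torch wheel built for the toolkit you have, before building source extensions like MinkowskiEngine or ctcdecode.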
```
(RegTR) hasitha@hunnas:~$ pip uninstall minkowskiengine
WARNING: Skipping minkowskiengine as it is not installed.
(RegTR) hasitha@hunnas:~$ pip install -U git+https://github.com/NVIDIA/MinkowskiEngine --no-deps
Collecting git+https://github.com/NVIDIA/MinkowskiEngine
  Cloning https://github.com/NVIDIA/MinkowskiEngine to /tmp/pip-req-build-nxz_kim5
  Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA/MinkowskiEngine /tmp/pip-req-build-nxz_kim5
  Resolved https://github.com/NVIDIA/MinkowskiEngine to commit 02fc608bea4c0549b0a7b00ca1bf15dee4a0b228
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: MinkowskiEngine
  Building wheel for MinkowskiEngine (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [295 lines of output]
      WARNING: Skipping MinkowskiEngine as it is not installed.
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for MinkowskiEngine
  Running setup.py clean for MinkowskiEngine
Failed to build MinkowskiEngine
ERROR: Could not build wheels for MinkowskiEngine, which is required to install pyproject.toml-based projects
(RegTR) hasitha@hunnas:~$ which python
/home/hasitha/virtualenvs/virtualenv/RegTR/bin/python
(RegTR) hasitha@hunnas:~$ python3 -c "import torch; print(torch.__version__)"
2.2.0+cu121
```