zhanghang1989 / PyTorch-Encoding

A CV toolkit for my papers.
https://hangzhang.org/PyTorch-Encoding/
MIT License

gtx3090, cuda=11.0, pytorch=1.7: error "nvcc fatal : Unsupported gpu architecture 'compute_86'" #383

Open zwyking opened 3 years ago

zwyking commented 3 years ago

Hello, I am trying to install in a gtx3090, cuda=11.0, pytorch=1.7 environment, but I get the error "nvcc fatal : Unsupported gpu architecture 'compute_86'". How should I deal with this?

JoanneZZH commented 3 years ago

Hi, you can try this line: export TORCH_CUDA_ARCH_LIST="7.5" in your command window, then build xxxx. I used this to overcome the problem, since 86 is too high for this CUDA toolkit.
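
For anyone landing here, a minimal sketch of what that looks like in practice (it assumes you build from a clone of this repo; removing the stale build directory first is my own addition, not something the thread required). As far as I know, CUDA 11.0 supports up to compute_80, which is why the 3090's native compute_86 fails; "7.5" is simply the value suggested above.

export TORCH_CUDA_ARCH_LIST="7.5"
cd PyTorch-Encoding
rm -rf build
python setup.py install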

zwyking commented 3 years ago

Thanks for your advice! It works~

yjt9299 commented 3 years ago

I followed this but still cannot install...

yjt9299 commented 3 years ago

copying encoding/version.py -> build/lib.linux-x86_64-3.7/encoding
running build_ext
building 'encoding.gpu' extension
gcc -pthread -B /home/yuejiutao/anaconda3/envs/taming/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/cpu -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/TH -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/THC -I/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/TH -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/yuejiutao/anaconda3/envs/taming/include/python3.7m -c /home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/operator.cpp -o build/temp.linux-x86_64-3.7/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/operator.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=gpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:140,
                 from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
                 from /home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/operator.h:1,
                 from /home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/operator.cpp:1:
/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:83: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
  #pragma omp parallel for if ((end - begin) >= grain_size)

/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/cpu -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/TH -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/THC -I/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/TH -I/home/yuejiutao/anaconda3/envs/taming/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/yuejiutao/anaconda3/envs/taming/include/python3.7m -c /home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/roi_align_kernel.cu -o build/temp.linux-x86_64-3.7/home/yuejiutao/Code/PyTorch-Encoding/encoding/lib/gpu/roi_align_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=gpu -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/usr/include/c++/8/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/8/bits/basic_string.tcc:578:28:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t; _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.h:5052:20:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::false_type) [with _InIterator = const char16_t; _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.h:5073:24:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t; _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/8/bits/basic_string.h:6725:95:   required from here
/usr/include/c++/8/bits/basic_string.tcc:1067:1: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits; _Alloc = std::allocator]’ without object
   __p->_M_set_sharable();
   ^    ~~~~~
/usr/include/c++/8/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/8/bits/basic_string.tcc:578:28:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t; _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.h:5052:20:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::false_type) [with _InIterator = const char32_t; _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.h:5073:24:   required from ‘static _CharT std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t; _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator]’
/usr/include/c++/8/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/8/bits/basic_string.h:6730:95:   required from here
/usr/include/c++/8/bits/basic_string.tcc:1067:1: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits; _Alloc = std::allocator]’ without object
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1

yjt9299 commented 3 years ago

Can you help me...?

yjt9299 commented 3 years ago

What does "build xxxx" mean?

Do we follow the install instructions:

git clone https://github.com/zhanghang1989/PyTorch-Encoding && cd PyTorch-Encoding

ubuntu

python setup.py install

yjt9299 commented 3 years ago

Did you install it following the instructions at https://hangzhang.org/PyTorch-Encoding/notes/compile.html ? Could you guide me through it...

JoanneZZH commented 3 years ago

Hi. By "build xxxx" I meant following the instructions provided by the author to install the "encoding" module, which needs to be built again on your machine.
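
Concretely, something like the following is what's meant: a sketch combining the install doc's steps with the TORCH_CUDA_ARCH_LIST workaround from earlier in this thread (adjust the arch value to whatever your CUDA toolkit supports).

export TORCH_CUDA_ARCH_LIST="7.5"
git clone https://github.com/zhanghang1989/PyTorch-Encoding && cd PyTorch-Encoding
python setup.py install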

JoanneZZH commented 3 years ago

Hi. I tried to install this module on Windows and Ubuntu with several settings, following the instruction doc. The only setting that worked for me is Ubuntu 20.04 + cuda 10.1 + pytorch 1.6. There may be other environments that support installing it, but you may try my configuration. BTW, I think the OS version and CUDA version are very important in this job. Good luck.
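
If it helps, a rough sketch of reproducing that configuration with conda (the environment name and exact package pins are my guess at the standard PyTorch 1.6 + CUDA 10.1 install, not something verified in this thread):

conda create -n encoding python=3.7
conda activate encoding
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
git clone https://github.com/zhanghang1989/PyTorch-Encoding && cd PyTorch-Encoding
python setup.py install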

yjt9299 commented 3 years ago

Okay, I'll try it again...

zwyking commented 3 years ago

Hi, I think you should check your nvcc version. My nvcc version is 11.0.
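
A quick way to confirm which nvcc and which PyTorch/CUDA combination the build is actually picking up (standard CUDA and PyTorch commands, nothing specific to this repo):

which nvcc
nvcc --version
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_capability())"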