Closed — lmmx closed this issue 3 years ago
I ran into this same issue with Cuda version 11 on Linux Mint 19.3 and CMake 3.20.1
Trying the above code, I got an error from CMake at the line

```cmake
list( INSERT CUDA_ARCHITECTURES "5.0" "5.2" )
```

Looks like the insert command is missing the list index. I changed this to

```cmake
list( INSERT CUDA_ARCHITECTURES 0 "5.0" "5.2" )
```

and CMake succeeded.
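For reference, `list( INSERT ... )` takes the target index before the values, and index 0 prepends the new entries. A standalone sketch with hypothetical architecture values (runnable with `cmake -P`):

```cmake
set( CUDA_ARCHITECTURES "6.0" "6.1" )
# INSERT requires an index argument before the values; 0 puts them at the front
list( INSERT CUDA_ARCHITECTURES 0 "5.0" "5.2" )
message( STATUS "CUDA_ARCHITECTURES = ${CUDA_ARCHITECTURES}" )
# -- CUDA_ARCHITECTURES = 5.0;5.2;6.0;6.1
```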
Ahhh that would explain it, I just cheated and specified the architectures I wanted explicitly, thanks
Thanks for reporting the issue and the extensive documentation. In the most recent commit, I adapted the CMake configuration with your proposed changes, including the comments by ptbrown above.
The automatic CUDA architecture assignment has been revised in the latest commit.
When trying to build Gpufit on Linux with CUDA 11, I received an error that `compute_30` was an "unsupported architecture". This arises because I have CUDA 11, while the first two architectures in the `CUDA_ARCHITECTURES` list are 3.0 and 3.5. The problem is that since CUDA 11, `compute_30` is deprecated, as documented here. It was also mentioned in a comment on the post that "Support for Kepler sm_30 and sm_32 architecture based products is dropped.", i.e. 3.2 should go alongside the 3.0, 3.5, and 3.7 in the table within the blog post (reproduced above).
This explains why `compute_30` was throwing an error and suggests how to fix it: simply test if the CUDA version is greater than or equal to 11, and then skip the architectures 3.7 and below. To resolve this I changed the `Gpufit/Gpufit/CMakeLists.txt` to create an empty list instead, and while I was fixing it for my current architecture I thought I should also future-proof it for CUDA 12 and post it here. I cross-referenced against the CUDA docs for Ampere, Volta, and Turing.
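A minimal sketch of what such a version-gated list could look like (assuming the `CUDA_VERSION` variable set by CMake's FindCUDA module, and hedging that 8.6 in particular needs a sufficiently recent CUDA 11.x toolkit):

```cmake
# Start from an empty list and only append architectures the toolkit supports
set( CUDA_ARCHITECTURES )
if( CUDA_VERSION VERSION_LESS "11.0" )
    # compute_30 through compute_37 (Kepler) were dropped in CUDA 11
    list( APPEND CUDA_ARCHITECTURES "3.0" "3.5" "3.7" )
endif()
# Maxwell through Turing are supported both before and after CUDA 11
list( APPEND CUDA_ARCHITECTURES "5.0" "5.2" "6.0" "6.1" "7.0" "7.5" )
if( NOT CUDA_VERSION VERSION_LESS "11.0" )
    # Ampere architectures, introduced with CUDA 11
    list( APPEND CUDA_ARCHITECTURES "8.0" "8.6" )
endif()
```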
I considered using `VERSION_GREATER_EQUAL` but apparently this breaks backward compatibility with CMake pre-3.7. When building, I had to skip this control flow block altogether by passing cmake `-DCUDA_ARCHITECTURES="8.0 8.6+PTX"`; perhaps this could be recommended in the docs. Happy to submit a pull request with this if you'd like me to.
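On the compatibility point: `NOT ... VERSION_LESS` expresses the same condition as `VERSION_GREATER_EQUAL` but works on CMake releases older than 3.7, so the version check can be written portably. A sketch (variable name assumed from the context above):

```cmake
# VERSION_GREATER_EQUAL requires CMake >= 3.7;
# NOT ... VERSION_LESS is equivalent and available in older releases
if( NOT CUDA_VERSION VERSION_LESS "11.0" )
    message( STATUS "CUDA ${CUDA_VERSION}: skipping deprecated Kepler architectures" )
endif()
```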