Closed. pearu closed this issue 5 years ago.
Those values are currently generated from the install_nvcc.sh script, but they aren't very friendly to a user changing their environment / CUDA version, which is userspace. @jakirkham @mike-wendt @raydouglass any thoughts on how we should best handle this?
As someone who bounces between multiple installed CUDA versions, I would be in favor of moving the CUDA_HOME handling into the activate.d script, so that if I change my user environment to switch from, say, CUDA 9.2 to 10.1, the switch is handled cleanly.
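A minimal sketch of what such an activate.d script could look like: it defers to the user's `CUDA_HOME` and only falls back to the default install location when the variable is unset. The fallback path and the exact variables exported are assumptions for illustration, not the package's actual script.

```shell
#!/bin/bash
# Hypothetical activate.d sketch: honor the user's CUDA_HOME instead of
# hardcoding /usr/local/cuda. ${VAR:-default} keeps an existing value
# and substitutes the default only when CUDA_HOME is unset or empty.
export CUDA_HOME="${CUDA_HOME:-/usr/local/cuda}"

# Derived settings then follow whichever toolkit CUDA_HOME points at.
export CUDA_PATH="${CUDA_HOME}"
export PATH="${CUDA_HOME}/bin:${PATH}"
```

With this shape, switching from CUDA 9.2 to 10.1 is just a matter of exporting a different `CUDA_HOME` before activating the environment.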
I raised this issue in https://github.com/conda-forge/staged-recipes/pull/8229 and @jakirkham mentioned that he did not want to support building outside of the conda-forge docker image. But we should support that in my opinion.
If there are other people willing to handle issues that come up from this addition, I wouldn't be opposed to seeing things become more flexible.
Issue: the activate script in the conda package (e.g. in https://anaconda.org/conda-forge/nvcc_linux-64/10.1/download/linux-64/nvcc_linux-64-10.1-h3d80acd_0.tar.bz2) contains a hardcoded `/usr/local/cuda` and ignores the `CUDA_HOME` environment variable. It looks like install_nvcc.sh (which creates the activate script) should escape `CUDA_HOME` when building the conda package, so that the variable is expanded at activation time rather than baked in at build time.
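As a sketch of that escaping fix (the file path and heredoc form are assumptions about how install_nvcc.sh writes the activate script, not its actual contents): quoting the heredoc delimiter keeps `${CUDA_HOME}` literal in the generated file, so it is resolved when the user activates the environment instead of when the package is built.

```shell
#!/bin/bash
# Hypothetical sketch of the fix inside install_nvcc.sh.
# conda-build normally sets PREFIX; fall back to a temp dir for a demo run.
PREFIX="${PREFIX:-$(mktemp -d)}"
mkdir -p "${PREFIX}/etc/conda/activate.d"

# The quoted delimiter <<'EOF' suppresses build-time expansion, so the
# generated activate script contains the literal text ${CUDA_HOME:-...}
# and reads the user's CUDA_HOME at activation time. An unquoted <<EOF
# would expand the variable now and hardcode whatever path it held.
cat > "${PREFIX}/etc/conda/activate.d/activate_nvcc.sh" <<'EOF'
export CUDA_HOME="${CUDA_HOME:-/usr/local/cuda}"
export PATH="${CUDA_HOME}/bin:${PATH}"
EOF
```

The same effect can be had with an unquoted heredoc by escaping each reference as `\${CUDA_HOME}`; quoting the delimiter is just less error-prone when several variables must survive.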