bdice opened this issue 2 years ago (status: open)
This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm that this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.
This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm that this issue still needs to be addressed.
Just to note, this will likely be possible with CUDA 12.3, so we should revisit then.
An API for getting the local CUDA runtime version was added in cuda-python 12.3, as cudart.getLocalRuntimeVersion().
12.3.0 release notes: https://nvidia.github.io/cuda-python/release/12.3.0-notes.html
It was also backported to cuda-python 11.8.3: https://nvidia.github.io/cuda-python/release/11.8.3-notes.html
To resolve this issue, we should switch to using that API and update the minimum version requirements. However, we may not be able to update to cuda-python 12.3 yet; it depends on how the conda-forge cuda-version and CUDA Toolkit compatibility works with cuda-python packages. I don't think the API was backported to cuda-python 12.2.1.
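A minimal sketch of what adopting the new API could look like, assuming cuda-python >= 12.3 (or the 11.8.3 backport) is available. The function names `local_runtime_version` and `decode_runtime_version` are hypothetical helpers, not RMM's actual code; the decoding assumes the standard CUDA encoding of 1000 * major + 10 * minor.

```python
def decode_runtime_version(version: int) -> tuple[int, int]:
    """Split an encoded CUDA runtime version into (major, minor).

    CUDA encodes runtime versions as 1000 * major + 10 * minor,
    so 12030 decodes to (12, 3).
    """
    return version // 1000, (version % 1000) // 10


def local_runtime_version() -> tuple[int, int]:
    """Return the locally installed CUDA runtime version as (major, minor).

    Requires cuda-python >= 12.3 (or the 11.8.3 backport), which added
    cudart.getLocalRuntimeVersion().
    """
    # Deferred import so this module can be imported without cuda-python.
    from cuda import cudart

    err, version = cudart.getLocalRuntimeVersion()
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"getLocalRuntimeVersion failed: {err}")
    return decode_runtime_version(version)


print(decode_runtime_version(12030))  # -> (12, 3)
```

The deferred import keeps the hard dependency on the newer cuda-python confined to the call site, which matters while the minimum version question above is unresolved.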
PR #946 introduces a workaround for an issue in cuda-python that fetches a hardcoded CUDA runtime version. https://github.com/NVIDIA/cuda-python/issues/16
Once that upstream issue is resolved, we should update the implementation in RMM to use
cuda.cudart.cudaRuntimeGetVersion() instead of numba.cuda.runtime.get_version().

_Originally posted by @jakirkham in https://github.com/rapidsai/rmm/pull/946#discussion_r788126573_