getkeops / keops

KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
https://www.kernel-operations.io
MIT License

Issue when install pykeops #313

Open Yeah2333 opened 1 year ago

Yeah2333 commented 1 year ago

I installed pykeops with pip. My system CUDA version is 10.2, but I did not export the CUDA path in my `.zshrc`, so PyTorch uses the CUDA toolkit 11.3 that ships with my conda environment. When I import pykeops, I get the following error:

```
Python 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pykeops
[KeOps] Compiling cuda jit compiler engine ...
[KeOps] Warning : There were warnings or errors compiling formula :
In file included from /home/wangzhiyong/DataDriver/anaconda3/envs/Easy-KPConv/lib/python3.9/site-packages/keopscore/binders/nvrtc/nvrtc_jit.cpp:21:0:
/home/wangzhiyong/DataDriver/anaconda3/envs/Easy-KPConv/lib/python3.9/site-packages/keopscore/binders/nvrtc/nvrtc_jit.cpp: In function ‘int Compile(const char*, const char*, int, int, const char*)’:
:0:20: error: ‘nvrtcGetCUBINSize’ was not declared in this scope
/home/wangzhiyong/DataDriver/anaconda3/envs/Easy-KPConv/lib/python3.9/site-packages/keopscore/include/utils_pe.h:6:26: note: in definition of macro ‘NVRTC_SAFE_CALL’
    nvrtcResult result = x; \
                         ^
/home/wangzhiyong/DataDriver/anaconda3/envs/Easy-KPConv/lib/python3.9/site-packages/keopscore/binders/nvrtc/nvrtc_jit.cpp:90:21: note: in expansion of macro ‘nvrtcGetTARGETSize’
    NVRTC_SAFE_CALL(nvrtcGetTARGETSize(prog, &targetSize));
```

The error seems to be related to `nvrtcGetCUBINSize`. I found a previous issue about `nvrtcGetCUBINSize` here: https://github.com/getkeops/keops/issues/236#issue-1184859865.
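For readers running into the same mismatch: a quick diagnostic sketch (not part of KeOps) to see which CUDA versions are in play. As far as I know, `nvrtcGetCUBINSize` was only introduced in CUDA 11.1, so compiling against a 10.2 toolkit would produce exactly the "not declared in this scope" error above.

```python
# Diagnostic sketch: print the CUDA versions visible from Python, to spot
# mismatches like the one above (system CUDA 10.2 vs. conda CUDA 11.3).
import os
import shutil
import subprocess

try:
    import torch
    # The CUDA version PyTorch was built against (None for CPU-only builds).
    print("PyTorch built against CUDA:", torch.version.cuda)
except ImportError:
    print("PyTorch not installed in this environment")

# The nvcc that KeOps' JIT compiler would pick up from PATH, if any.
nvcc = shutil.which("nvcc")
if nvcc:
    result = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    print("nvcc reports:", result.stdout.strip().splitlines()[-1])
else:
    print("nvcc is not on PATH")

# The toolkit location KeOps can be pointed at explicitly.
print("CUDA_PATH:", os.environ.get("CUDA_PATH", "<unset>"))
```

If the versions printed here disagree, that mismatch is the likely culprit.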

jeanfeydy commented 1 year ago

Hi @Yeah2333 ,

Thanks for your interest in our library! As discussed in issue #236, different CUDA versions are not compatible with each other. You should make sure that your program has access to the correct version of CUDA by setting the environment variable CUDA_PATH to a suitable value - see here for details. (I would tend to prefer keeping compatibility between KeOps and PyTorch, but I do not know how crucial this is.)
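As a concrete illustration, setting CUDA_PATH before importing pykeops might look like the sketch below. The toolkit path `/usr/local/cuda-11.3` is only a placeholder; use whatever toolkit matches your PyTorch build.

```shell
# Example sketch: point KeOps at one consistent CUDA toolkit before
# importing pykeops. /usr/local/cuda-11.3 is a placeholder path; adjust
# it to the toolkit that matches your PyTorch build.
export CUDA_PATH=/usr/local/cuda-11.3
export PATH="$CUDA_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_PATH/lib64:${LD_LIBRARY_PATH:-}"

# Sanity check: nvcc (if present) and PyTorch should report the same
# major.minor CUDA version.
command -v nvcc >/dev/null 2>&1 && nvcc --version | tail -n 1 || true
python -c "import torch; print(torch.version.cuda)" 2>/dev/null || true
```

Putting these exports in your `.zshrc` (as mentioned above) makes the choice persistent across shells.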

Alternatively, a simpler option would be to use our official Docker image via Docker / Singularity, as documented here. You may customize your configuration by tweaking our Dockerfile. On heterogeneous configurations that already contain different versions of CUDA, this is really the recommended, hassle-free option.

What do you think?