Closed: taha-yassine closed this issue 1 year ago
DrJit should pick up the GPU driver library and hence use the driver API rather than the runtime API (the one installed through the CUDA toolkit). Is your GPU driver up to date as well? What version are you on?
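If it helps narrow this down, here is a minimal sketch (independent of DrJit) that loads the driver library the same way and queries its version through the driver API; cuDriverGetVersion is a standard driver API entry point, but the exact library name and path may differ on your system:

```python
# Minimal sketch: talk to the CUDA *driver* API (libcuda.so) directly,
# bypassing the runtime API that ships with the CUDA toolkit.
import ctypes

cuda = ctypes.CDLL("libcuda.so")

version = ctypes.c_int()
ret = cuda.cuDriverGetVersion(ctypes.byref(version))
if ret == 0:  # CUDA_SUCCESS
    # The driver API encodes the version as 1000 * major + 10 * minor
    print(f"Driver API version: {version.value // 1000}.{(version.value % 1000) // 10}")
else:
    print(f"cuDriverGetVersion failed with error code {ret}")
```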
I don't know what you're planning to do, but raytracing operations are not supported on WSL, I believe. This is not a limitation on our end, but rather one of Nvidia's OptiX. You should look into that if it applies to your case.
For the full context, I'm trying to use Sionna, which relies on Mitsuba. I think what I'm trying to do with it uses raytracing, but I'm not sure.
I've set up CUDA following the official guide. I don't know if it's relevant, but it states that "the CUDA driver installed on Windows host will be stubbed inside the WSL 2 as libcuda.so". So if I understand correctly, I installed a special version of the CUDA toolkit which doesn't come with the driver and instead relies on the driver installed on the Windows side. The driver version is 528.49.
If there is any other information I can provide to help further investigate I would be happy to do so.
Indeed, this will require raytracing and is not supported [1]. You will need to wait for OptiX to be supported in WSL.
[1] https://forums.developer.nvidia.com/t/optix-on-wsl-windows-subsystem-for-linux/140297/19
OK, thanks. Is there a way I can force DrJit to use LLVM and run on the CPU in the meantime?
DrJit defines the backend to use through its type system: dr.llvm.Float is on the CPU, dr.cuda.Float is on the GPU. There is most likely some setting in Sionna to define this globally.
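To illustrate (a minimal sketch, assuming the LLVM backend was compiled into your DrJit build):

```python
import drjit as dr

# LLVM backend: the array lives on the CPU and is JIT-compiled via LLVM
x = dr.llvm.Float([1.0, 2.0, 3.0])
print(dr.sqrt(x))

# CUDA backend: identical code, but evaluated on the GPU.
# This is the path that is currently disabled on your setup.
# y = dr.cuda.Float([1.0, 2.0, 3.0])
```

For Mitsuba itself, the CPU path is usually picked by selecting an LLVM variant, e.g. mitsuba.set_variant("llvm_ad_rgb"), provided that variant was compiled into your build.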
I'm trying to import drjit in python but I get the following error right after the import statement:
jit_cuda_api_init(): could not find symbol "cuDevicePrimaryCtxRelease_v2" -- disabling CUDA backend!
After some digging, it seems like the error happens when trying to import drjit_ext in __init__.py, so it's on the C++ side. I'm using WSL with the latest CUDA driver installed on Windows and CUDA Toolkit version 11.8. Thanks!
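For reference, here is a quick check one could run (a sketch of my own, not DrJit code) to see whether the stubbed libcuda.so exports that symbol at all; if it prints False, the stub doesn't expose the entry point DrJit is asking for, and if it prints True, the problem lies elsewhere:

```python
# Check whether libcuda.so exports the symbol DrJit failed to resolve.
import ctypes

cuda = ctypes.CDLL("libcuda.so")
print(hasattr(cuda, "cuDevicePrimaryCtxRelease_v2"))
```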