nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License
70.58k stars 7.7k forks

Issue: WSL - Unable to retrieve list of all GPU devices #1907

Closed solidstudio closed 9 months ago

solidstudio commented 9 months ago

Issue you'd like to raise.

Using WSL2 with an NVIDIA GPU gives the error "Unable to retrieve list of all GPU devices".

Suggestion:

nvidia-smi shows the GPU, and vulkaninfo --summary returns:

Devices:

GPU0:
    apiVersion         = 4202763 (1.2.267)
    driverVersion      = 96481284 (0x5c03004)
    vendorID           = 0x10de
    deviceID           = 0x249c
    deviceType         = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
    deviceName         = Microsoft Direct3D12 (NVIDIA GeForce RTX 3080 Laptop GPU)
    driverID           = UNKNOWN_VkDriverId_value23
    driverName         = Dozen
    driverInfo         = Mesa 23.3.4 - kisak-mesa PPA
    conformanceVersion = 0.0.0.0
    deviceUUID         = ddf309a8-1937-9e35-5423-e7159ec3de34
    driverUUID         = 7c3e0565-47b3-25bb-3388-45a9392cdd44
GPU1:
    apiVersion         = 4206859 (1.3.267)
    driverVersion      = 1 (0x0001)
    vendorID           = 0x10005
    deviceID           = 0x0000
    deviceType         = PHYSICAL_DEVICE_TYPE_CPU
    deviceName         = llvmpipe (LLVM 15.0.7, 256 bits)
    driverID           = DRIVER_ID_MESA_LLVMPIPE
    driverName         = llvmpipe
    driverInfo         = Mesa 23.3.4 - kisak-mesa PPA (LLVM 15.0.7)
    conformanceVersion = 1.3.1.1
    deviceUUID         = 6d657361-3233-2e33-2e34-202d206b6900
    driverUUID         = 6c6c766d-7069-7065-5555-494400000000
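The key detail in this dump is that neither device is a native NVIDIA Vulkan driver: GPU0 is Mesa's Dozen (Vulkan-on-Direct3D12) layer and GPU1 is the llvmpipe CPU software rasterizer. A small sketch for picking those fields out of a flattened dump like the one above (`parse_vulkan_devices` is an illustrative helper, not part of vulkaninfo or gpt4all):

```python
import re

def parse_vulkan_devices(summary: str):
    """Split a `vulkaninfo --summary` device dump into per-GPU
    dicts of `key = value` fields."""
    devices = {}
    # Device entries start with "GPU<n>:"
    parts = re.split(r"(GPU\d+):", summary)
    for name, body in zip(parts[1::2], parts[2::2]):
        # Each field is "key = value"; a value runs until the next
        # "key =" or the end of the line.
        fields = dict(re.findall(r"(\w+)\s*=\s*(.+?)(?=\s+\w+\s*=|\s*$)",
                                 body, re.M))
        devices[name] = fields
    return devices

sample = ("GPU0: deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU driverName = Dozen\n"
          "GPU1: deviceType = PHYSICAL_DEVICE_TYPE_CPU driverName = llvmpipe\n")
devices = parse_vulkan_devices(sample)
# devices["GPU0"]["driverName"] == "Dozen" (a translation layer, not the NVIDIA driver)
# devices["GPU1"]["driverName"] == "llvmpipe" (CPU fallback)
```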

cebtenzzre commented 9 months ago

Why are you running GPT4All in WSL? This is not a recommended configuration. Applications that use the GPU are best run natively.

SimLeek commented 9 months ago

First, can you compile and run a simple Vulkan test program on WSL2? Not just vulkaninfo, but a program that actually uses the GPU to do something. Does this triangle work? [link]

Second, is your GPU the 8GB version or the 16GB version? Does it actually have enough memory to run the model? Looking at the code here and here, you'll get that error if you don't have enough GPU memory.

Third, if all else fails to diagnose the problem, you'll probably want to open an IDE/debugger yourself and see which values are going into those functions. I don't think there's enough info here, and I also don't think GPU support is a high priority even for the WSL team: link
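The second point can be made concrete with a minimal sketch of the kind of VRAM pre-check being described (`VulkanDevice`, `heap_bytes`, and `usable_devices` are illustrative names, not gpt4all's actual code): if every device fails the memory check, the candidate list ends up empty, which surfaces as "no GPU devices".

```python
from dataclasses import dataclass

@dataclass
class VulkanDevice:
    name: str
    heap_bytes: int  # size of the device-local (VRAM) heap

def usable_devices(devices, required_bytes):
    # Hypothetical filter: keep only devices whose VRAM heap can hold
    # the model. If none survive, the caller sees an empty device list.
    return [d for d in devices if d.heap_bytes >= required_bytes]

GiB = 1024 ** 3
cards = [VulkanDevice("RTX 3080 Laptop (8 GB)", 8 * GiB)]
# usable_devices(cards, 7 * GiB) -> the 3080 passes
# usable_devices(cards, 9 * GiB) -> [] (same symptom as the error above)
```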

cebtenzzre commented 9 months ago

> Second, is your GPU the 8GB version or the 16GB version? Does it actually have enough memory to run the model? Looking at the code here and here, you'll get that error if you don't have enough GPU memory.

A discrete RTX 3080 should show up in any case. The memory check doesn't actually do anything since GGUF support was added, and the GPU is certainly new enough to pass the feature requirements (we even added Maxwell support recently, release coming soon).

But that's assuming you're using the real Nvidia driver natively on Linux or Windows :)

apage43 commented 9 months ago

The real NVIDIA driver isn't exposed directly to WSL2; the guest only gets access to CUDA and DirectX 12 passthrough devices.

Using OpenGL or Vulkan under WSL2 is not supported directly. Recent Mesa has a Vulkan-to-DX12 translation layer (Dozen), but it isn't enabled in the default Ubuntu packages yet, and it is likely not complete enough to run gpt4all.
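One way to see which Vulkan drivers are actually available is to scan the loader's ICD manifest directories (standard locations include /usr/share/vulkan/icd.d and /etc/vulkan/icd.d; `find_vulkan_icds` is an illustrative helper, assuming the standard manifest JSON format):

```python
import glob
import json
import os

def find_vulkan_icds(search_dirs):
    """Return (manifest filename, driver library) pairs from Vulkan
    ICD manifest JSONs found in the given directories."""
    drivers = []
    for d in search_dirs:
        for manifest in sorted(glob.glob(os.path.join(d, "*.json"))):
            with open(manifest) as f:
                data = json.load(f)
            # Standard ICD manifest shape: {"ICD": {"library_path": ...}}
            lib = data.get("ICD", {}).get("library_path")
            if lib:
                drivers.append((os.path.basename(manifest), lib))
    return drivers
```

On a WSL2 guest with only the kisak-mesa packages installed, you would expect to see Mesa manifests (e.g. Dozen, llvmpipe) but no NVIDIA ICD, matching the vulkaninfo output above.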

cebtenzzre commented 9 months ago

This is not an issue with GPT4All, then. You may want to report your experience to the developers of WSL or kisak-mesa.