meta-llama / llama3

The official Meta Llama 3 GitHub site

ggml_cuda_init: failed to initialize CUDA: initialization error #215

Open sushaofeng123 opened 1 month ago

sushaofeng123 commented 1 month ago

Help: I have deployed Ollama in an offline environment on Ubuntu 18.04.3. When running the llama3:8b model, I found that the GPU was not being used, only the CPU. Checking the logs, I found the error message: `ggml_cuda_init: failed to initialize CUDA: initialization error`.

But this does not work. What should I do?

PyTorch version: 1.4.0, CUDA version: 10.0, GPU: NVIDIA T4

MightyStud commented 1 month ago

Have you checked the minimum required CUDA version / NVIDIA driver version for the latest ggml? You can also check the ggml and llama.cpp repos for more help on this issue.
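To illustrate the driver/CUDA check suggested above, here is a minimal sketch of comparing an installed NVIDIA driver version against the minimum Linux driver a given CUDA toolkit release requires. The `MIN_LINUX_DRIVER` table and the `driver_supports` helper are assumptions for illustration; verify the actual minimums against NVIDIA's CUDA compatibility documentation for your toolkit release.

```python
# Hypothetical helper: check whether the installed NVIDIA driver (as reported
# by `nvidia-smi`) meets the minimum Linux driver for a CUDA toolkit release.
# The table below is an assumption drawn from NVIDIA's compatibility notes;
# confirm the exact values in the official CUDA release documentation.

MIN_LINUX_DRIVER = {
    "10.0": (410, 48),   # assumed minimum for CUDA 10.0 on Linux
    "11.0": (450, 36),   # assumed minimum for CUDA 11.0 on Linux
    "12.0": (525, 60),   # assumed minimum for CUDA 12.0 on Linux
}

def parse_driver(version: str) -> tuple:
    """Turn a driver string like '410.48' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def driver_supports(cuda_version: str, driver_version: str) -> bool:
    """Return True if the driver meets the minimum for the CUDA release."""
    minimum = MIN_LINUX_DRIVER[cuda_version]
    return parse_driver(driver_version) >= minimum

# Example: a driver older than the CUDA 10.0 minimum fails the check,
# while a newer one passes.
print(driver_supports("10.0", "396.26"))  # False
print(driver_supports("10.0", "418.87"))  # True
```

If the check fails, updating the NVIDIA driver (which can be done without a network connection by copying the `.run` installer onto the offline machine) is usually the first thing to try before rebuilding llama.cpp or Ollama.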