Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

Bug: stuck in extracting /zip/ggml-rocm.dll to /C/Users/x/.llamafile/v/0.8.7/ggml-rocm.dll #476

Open xd2333 opened 3 days ago

xd2333 commented 3 days ago

Contact Details

No response

What happened?

stuck in extracting /zip/ggml-rocm.dll to /C/Users/x/.llamafile/v/0.8.7/ggml-rocm.dll

I've tested both my PC and my laptop with an NVIDIA card; both get stuck at this step.

Version

v0.8.7

What operating system are you seeing the problem on?

No response

Relevant log output

E:\llm\QWEN\qwen1_5-7b-chat-q4_0>llamafile.exe -m qwen1_5-7b-chat-q4_0.gguf -ngl 999
import_cuda_impl: initializing gpu module...
get_rocm_bin_path: note: amdclang++.exe not found on $PATH
get_rocm_bin_path: note: $HIP_PATH/bin/amdclang++.exe does not exist
get_rocm_bin_path: note: /opt/rocm/bin/amdclang++.exe does not exist
get_rocm_bin_path: note: clang++.exe not found on $PATH
get_rocm_bin_path: note: $HIP_PATH/bin/clang++.exe does not exist
get_rocm_bin_path: note: /opt/rocm/bin/clang++.exe does not exist
import_cuda_impl: won't compile AMD GPU support because $HIP_PATH/bin/clang++ is missing
extracting /zip/ggml-rocm.dll to /C/Users/x/.llamafile/v/0.8.7/ggml-rocm.dll
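
[Editor's note: since both reporters are on NVIDIA hardware, one hedged workaround to try is steering llamafile away from the ROCm path with its documented --gpu selector, so the ggml-rocm.dll extraction is never attempted. This is a sketch based on the flags described in the llamafile README, not a confirmed fix for this issue.]

    :: Force the NVIDIA backend so the AMD/ROCm DLL is not needed
    llamafile.exe --gpu nvidia -ngl 999 -m qwen1_5-7b-chat-q4_0.gguf

    :: Or rule out GPU setup entirely to confirm CPU inference works
    llamafile.exe --gpu disable -m qwen1_5-7b-chat-q4_0.gguf

If the process still hangs while writing under C:\Users\x\.llamafile, it may also be worth checking whether antivirus software is holding a lock on the extracted DLL.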
amonpaike commented 2 days ago

Same problem on my PC with an NVIDIA RTX 3060 12GB.