oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

AMDGPU is broken on 1.8 #6169

Open pl752 opened 1 week ago

pl752 commented 1 week ago

Describe the bug

llama.cpp doesn't see the Radeon RX 6900 XT; the previous version worked fine. It seems dependencies are missing (ROCm 5.7.1 is installed): in particular, llama_cpp_cuda cannot be imported.

pl752@pl752-desktop:~/text-generation-webui-1.8$ source installer_files/conda/bin/activate 
(base) pl752@pl752-desktop:~/text-generation-webui-1.8$ conda activate installer_files/env/
(/home/pl752/text-generation-webui-1.8/installer_files/env) pl752@pl752-desktop:~/text-generation-webui-1.8$ python
Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llama_cpp
>>> import llama_cpp_cuda
Traceback (most recent call last):
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 70, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libhipblas.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 83, in <module>
    _lib = _load_shared_library(_lib_base_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 72, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/libllama.so': libhipblas.so.1: cannot open shared object file: No such file or directory
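A minimal sketch for narrowing this down from the same Python prompt, before importing llama_cpp_cuda: it checks whether the dynamic linker can resolve the libraries named in the error messages. The library names ("hipblas", "omp") are taken from the errors reported in this thread; everything else is an assumption, not part of the webui.

```python
import ctypes
import ctypes.util

# On a working ROCm install, both of these should resolve to a path;
# None means the loader cannot find the library on its search path.
for name in ("hipblas", "omp"):
    path = ctypes.util.find_library(name)
    print(f"lib{name}: {path or 'NOT FOUND'}")

# Loading an unresolvable library raises OSError, which is exactly what
# llama_cpp_cuda's _load_shared_library wraps into the RuntimeError above.
try:
    ctypes.CDLL("libhipblas.so.1")
    print("libhipblas.so.1 loaded")
except OSError as e:
    print("load failed:", e)
```

If both names print NOT FOUND, the ROCm runtime directory is simply not on the loader's search path for the conda environment.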

Is there an existing issue for this?

Reproduction

Load any model with llamacpp_hf on an AMD GPU.

Only the CPU is utilized.

Screenshot

No response

Logs

llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  8137.64 MiB
..........

System Info

Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:    22.04
Codename:   jammy
x86_64, Ryzen 9 5900X
128 GB RAM, NVMe
Radeon RX 6900 XT
pl752 commented 1 week ago

Also, libomp.so is missing.
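A possible workaround sketch until the packaging is fixed: put the ROCm library directory on LD_LIBRARY_PATH before launching, so libhipblas.so.1 and libomp.so can be found. The path /opt/rocm-5.7.1/lib is the default install prefix for ROCm 5.7.1 and is an assumption here; adjust it to your install.

```python
import os

# /opt/rocm-5.7.1/lib is assumed; check where your ROCm install lives.
rocm_lib = "/opt/rocm-5.7.1/lib"

# Prepend the ROCm lib dir to the loader search path for a child process.
env = dict(os.environ)
env["LD_LIBRARY_PATH"] = rocm_lib + os.pathsep + env.get("LD_LIBRARY_PATH", "")

# Then re-launch the webui with this environment, e.g.:
# subprocess.run(["python", "server.py"], env=env)
```

Equivalently, `export LD_LIBRARY_PATH=/opt/rocm-5.7.1/lib:$LD_LIBRARY_PATH` in the shell before `python server.py` should have the same effect.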

sandbox404 commented 1 week ago

Same here. Although I chose the AMD version during the install, 1.8 cannot use the AMD GPU; it just runs on the CPU. I'm rolling back to snapshot-2024-04-28.