llama.cpp doesn't see the Radeon RX 6900 XT; the previous version worked fine. It looks like a dependency is missing (ROCm 5.7.1 is installed). In particular, llama_cpp_cuda cannot be imported:
pl752@pl752-desktop:~/text-generation-webui-1.8$ source installer_files/conda/bin/activate
(base) pl752@pl752-desktop:~/text-generation-webui-1.8$ conda activate installer_files/env/
(/home/pl752/text-generation-webui-1.8/installer_files/env) pl752@pl752-desktop:~/text-generation-webui-1.8$ python
Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llama_cpp
>>> import llama_cpp_cuda
Traceback (most recent call last):
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 70, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libhipblas.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 83, in <module>
    _lib = _load_shared_library(_lib_base_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama_cpp.py", line 72, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/home/pl752/text-generation-webui-1.8/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/libllama.so': libhipblas.so.1: cannot open shared object file: No such file or directory
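For anyone debugging the same failure: the load error can be reproduced outside the webui by asking ctypes for the library directly, the same way llama_cpp_cuda does. A minimal probe, run inside the webui's conda env (the /opt/rocm/lib path mentioned in the comments is an assumption; ROCm may be installed elsewhere):

```python
# Standalone probe for the failure above: can the dynamic loader find hipBLAS?
import ctypes
import ctypes.util
import os

# llama_cpp_cuda calls ctypes.CDLL on libllama.so, which in turn needs
# libhipblas.so.1; probing that file directly isolates the missing dependency.
try:
    ctypes.CDLL("libhipblas.so.1")
    print("libhipblas.so.1: loadable")
except OSError as e:
    print(f"libhipblas.so.1: NOT loadable ({e})")

# find_library consults the ld cache; ROCm's library directory
# (commonly /opt/rocm/lib -- an assumption) is often not registered there.
print("ld lookup:", ctypes.util.find_library("hipblas"))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
# If the file exists on disk but the probe fails, exporting
# LD_LIBRARY_PATH to the directory containing it before launching
# usually lets ctypes.CDLL resolve the dependency.
```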
Reproduction
Load any model with llamacpp_HF on an AMD GPU: only the CPU is utilized. (See the sketch below for why the failed import degrades silently to CPU-only inference.)
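Why this shows up as CPU-only inference rather than a hard error: the webui tries the GPU wheel first and falls back to the CPU wheel when the import fails. A rough sketch of that selection logic (simplified, an assumption rather than the exact upstream code; note the wheel raises RuntimeError, not ImportError, when its shared library won't load):

```python
# Simplified sketch of how a GPU-wheel import failure degrades to CPU-only.
try:
    # ROCm/CUDA build of llama-cpp-python; importing it triggers the
    # ctypes.CDLL call that fails above with a RuntimeError.
    import llama_cpp_cuda as llama_cpp
    backend = "gpu"
except Exception:  # RuntimeError here, not just ImportError
    import llama_cpp  # CPU-only build: all layers stay on the CPU
    backend = "cpu"

print(f"llama-cpp-python backend in use: {backend}")
```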
System Info
Linux, Radeon RX 6900 XT, ROCm 5.7.1, Python 3.11.9, text-generation-webui 1.8 (conda env from the one-click installer).