Closed: abiwin0 closed this issue 1 month ago
"I hadn't tested it in Text-Generation-Webui previously, The two libs included arerocblas.dll
and library. However, ComfyUI and SD utilize raley on 'zluda'.
To make these models work on Windows using ROCm,you need zluda,and use 'zluda' renames essential CUDA libraries:
cublas.dll becomes cublas64_11.dll
cusparse.dll becomes cusparse64_11.dll
nvrtc.dll becomes nvrtc64_112_0.dll
These renamed copies replace the originals in the torch/libs directory, enabling ROCm functionality. Without this renaming step it won't work; a minimal sketch of the copy-and-rename is shown below.
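For reference, a minimal sketch of that copy-and-rename step in Python. The ZLUDA folder and the torch lib path are assumptions and must be adjusted to your own install:

```python
# Minimal sketch, assuming ZLUDA was unpacked to C:\zluda and the webui's
# torch lives under installer_files\env -- adjust both paths to your setup.
import shutil
from pathlib import Path

ZLUDA_DIR = Path(r"C:\zluda")  # assumption: where ZLUDA was unpacked
TORCH_LIB = Path(r"installer_files\env\Lib\site-packages\torch\lib")  # assumption

# ZLUDA's DLLs, copied under the CUDA names that torch expects.
RENAMES = {
    "cublas.dll": "cublas64_11.dll",
    "cusparse.dll": "cusparse64_11.dll",
    "nvrtc.dll": "nvrtc64_112_0.dll",
}

for src_name, dst_name in RENAMES.items():
    src = ZLUDA_DIR / src_name
    dst = TORCH_LIB / dst_name
    shutil.copy2(src, dst)  # overwrite the original CUDA DLL with the ZLUDA copy
    print(f"copied {src} -> {dst}")
```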
Alternatively, you may skip the llama-cpp-python-cuBLAS-wheels build and the ZLUDA rename steps entirely: try installing the pre-built wheel llama_cpp_python_cuda-0.2.15+rocm5.5.1-cp38-cp38-win_amd64.whl directly in the text webui environment and test whether it works."
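A quick way to test the wheel once installed, sketched below; the model path is a placeholder, and the wheel filename is the one quoted above:

```python
# After installing the pre-built wheel, e.g.:
#   pip install llama_cpp_python_cuda-0.2.15+rocm5.5.1-cp38-cp38-win_amd64.whl
from llama_cpp import Llama

# model_path is a placeholder; point it at any local GGUF model
llm = Llama(model_path="models/your-model.gguf", n_gpu_layers=-1)  # offload all layers
out = llm("Hello", max_tokens=8)
print(out["choices"][0]["text"])
```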
@abiwin0
As there has been no response and this issue is not caused by this repository, I am closing it.
Hi!
I use the rocblas.for.gfx90c.workable lib, which works perfectly on my Ryzen 7 5700G. It has run without problems in ComfyUI and Stable Diffusion, but in Text-Generation-Webui it starts and loads the model, and then when I click Generate I get the following error:
Prompt evaluation: 0%| | 0/1 [00:00<?, ?it/s]
CUDA error: named symbol not found
  current device: 0, in function launch_mul_mat_q at D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\ggml\src\ggml-cuda\template-instances../mmq.cuh:2770
  cudaFuncSetAttribute(mul_mat_q<type, mmq_x, 8, false>, cudaFuncAttributeMaxDynamicSharedMemorySize, shmem)
D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\ggml\src\ggml-cuda.cu:101: CUDA error
Any idea how to fix this? I read in a closed issue that someone managed to get it running, but they only said they replaced two libs, without mentioning which ones or how. Could you tell me how to fix the error? Thanks for your great work, and thanks in advance for your help.