likelovewant / ROCmLibs-for-gfx1103-AMD780M-APU

ROCm library files for gfx1103, with updates for other AMD GPU architectures, for use on Windows.

Can it theoretically be used for text-generation-webui? #1

Closed. HysterLc closed this issue 7 months ago.

likelovewant commented 8 months ago

Sure, you can try. Currently supported: SD.Next, lshqqytiger/stable-diffusion-webui-directml.git, and the official ComfyUI with the code tweaked. Others have not been tested yet.

HysterLc commented 8 months ago

I have tested it in text generation. The model loaded successfully, so it is very close to working, but generation fails with:

CUDA error: the requested functionality is not supported
current device: 0, in function ggml_cuda_op_mul_mat_cublas at D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\ggml-cuda.cu:9729
cublasSgemm_v2(g_cublas_handles[id], CUBLAS_OP_T, CUBLAS_OP_N, row_diff, src1_ncols, ne10, &alpha, src0_ddf_i, ne00, src1_ddf1_i, ne10, &beta, dst_dd_i, ldc)
GGML_ASSERT: D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\ggml-cuda.cu:255: !"CUDA error"

It seems to need another library named cuBLAS.
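For anyone hitting the same assert, here is a minimal sketch of reproducing it from llama-cpp-python, which also shows why loading succeeds while generation fails. The model path is a placeholder, and it assumes a llama-cpp-python build whose GPU backend matches the ROCm/HIP libraries installed on the machine; these are my assumptions, not details given in the thread.

```python
# Minimal repro sketch (hypothetical paths/settings, not from the thread).
# Assumes a llama-cpp-python build whose GPU backend matches the installed
# ROCm/HIP libraries.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",  # placeholder model path
    n_gpu_layers=-1,  # offload all layers so the GPU matmul path is used
    verbose=True,     # logs which backend/device ggml selected at load time
)

# Loading only maps the weights; the GEMM call from the error above
# (ggml_cuda_op_mul_mat_cublas) is first reached during generation,
# which is why the model "loads successfully" but the assert fires here.
out = llm("Hello,", max_tokens=16)
print(out["choices"][0]["text"])
```

If the assert still fires, the library files the backend loads at runtime are the next thing to check, which is what the replies below describe.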

HysterLc commented 8 months ago

I replaced two libs from your repository and then it worked, for both text generation and Stable Diffusion!
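For later readers asking the same question as below, here is a rough sketch of the kind of file swap being described: back up the ROCm library files an application loads and copy in the ones from this repository. The directories and layout here are placeholders of my own; the thread does not say exactly which two files were replaced, so check the repository's README for the files that match your GPU.

```python
# Hypothetical sketch of swapping in ROCm library files. All paths are
# placeholders; the thread does not name the exact files that were replaced.
import shutil
from pathlib import Path

src = Path(r"C:\Downloads\rocmlibs-gfx1103")     # extracted files from this repo (placeholder)
dst = Path(r"C:\apps\example-app\rocm\library")  # wherever the app keeps its ROCm libs (placeholder)

# Keep a one-time backup so the swap is reversible.
backup = dst.with_name(dst.name + ".bak")
if not backup.exists():
    shutil.copytree(dst, backup)

# Copy everything from the repo's folder over the application's copies.
for item in src.iterdir():
    target = dst / item.name
    if item.is_dir():
        shutil.copytree(item, target, dirs_exist_ok=True)
    else:
        shutil.copy2(item, target)

print(f"Copied files from {src} into {dst}; backup kept at {backup}")
```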

likelovewant commented 8 months ago

That's great. WebUI Forge for AMD is now also available. You may try it if you want to experience another version of SD.

HysterLc commented 8 months ago

Of course. I will try it next weekend.

abiwin0 commented 2 months ago

Excuse me, how did you get it to work? I can also load the model correctly, but when generating, it shows me a cuBLAS-wheels error. Which libraries did you replace? Thanks in advance.