Open atelepov opened 1 month ago
Support for HIP_UMA was added in llama.cpp in PR https://github.com/ggerganov/llama.cpp/pull/7414
ROCm support for this GPU can be implemented following the example at https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU
Together, these could speed up running LLMs via ROCm on the AMD Radeon 780M APU (gfx1103).
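For context, a rough build sketch of how this is enabled directly in llama.cpp. The flag names below (`LLAMA_HIPBLAS`, `LLAMA_HIP_UMA`) are what the linked PR used at the time and may have been renamed since; the `HSA_OVERRIDE_GFX_VERSION` value is a commonly reported workaround for RDNA3 APUs and is an assumption here, not something from this issue:

```shell
# Build llama.cpp with the ROCm/HIP backend and unified-memory
# allocation (HIP_UMA), so the APU can use system RAM directly.
# Flag names taken from llama.cpp PR #7414; verify against the
# current llama.cpp build docs before use.
cmake -B build \
  -DLLAMA_HIPBLAS=ON \
  -DLLAMA_HIP_UMA=ON \
  -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
  -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++
cmake --build build --config Release -j

# gfx1103 (Radeon 780M) is not an officially supported ROCm target;
# overriding the reported GFX version is a common workaround
# (assumed value -- adjust for your ROCm build).
HSA_OVERRIDE_GFX_VERSION=11.0.2 ./build/bin/llama-cli -m model.gguf -ngl 99
```

The relevant change in the PR is that, with `HIP_UMA` enabled, device buffers are allocated with managed memory instead of dedicated VRAM, which matters on APUs where GPU-visible memory is carved out of system RAM.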