Closed oliverhu closed 4 months ago
I don't know the solution, but if you want to use llama.cpp with your GPU in the meantime, you might want to try it with CLBlast instead of ROCm. It should give you a significant speedup compared to CPU-only; not as good as ROCm should give, but it should get you close.
This issue is missing info. Please share the commands used to build llama.cpp, the output of rocminfo, and the full output of llama.cpp.
I know this is not the topic, but I wanted to compare the speed of the strongest Intel CPU (13900K) against the strongest AMD CPU (7950X3D), running the model on CPU only.
Same model size and quantization.
Have a look:
main.exe --model models\new3\ultralm-13b-v2.0.Q4_0.gguf --mlock --color --threads 16 --keep -1 --batch_size 512 --n_predict -1 --top_k 40 --top_p 0.9 --temp 0.96 --repeat_penalty 1.1 --ctx_size 4096 --interactive --instruct -ngl 0
llama_print_timings: load time = 13046.85 ms
llama_print_timings: sample time = 11.69 ms / 85 runs ( 0.14 ms per token, 7273.04 tokens per second)
llama_print_timings: prompt eval time = 2055.02 ms / 77 tokens ( 26.69 ms per token, 37.47 tokens per second)
llama_print_timings: eval time = 10850.53 ms / 85 runs ( 127.65 ms per token, 7.83 tokens per second)
llama_print_timings: total time = 15315.68 ms
Almost the same performance, slightly faster on the 7950X3D for answers, but for some reason prompt processing was almost 10x faster for me... Another difference is about 2x less energy used for the task on the AMD CPU ;P
Same issue on RX 560, granted it's an older card.
I updated the question with all the details (should be more than enough..). In the meantime, I tested an RTX 4070 Ti... it is probably 10x faster than the RX 7900 XTX...
4070 Ti: 56.23 tokens per second
llama_print_timings: load time = 824.29 ms
llama_print_timings: sample time = 52.74 ms / 128 runs ( 0.41 ms per token, 2427.18 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 2276.23 ms / 128 runs ( 17.78 ms per token, 56.23 tokens per second)
llama_print_timings: total time = 2357.70 ms
Log end
7900XTX 5.62 tokens per second
llama_print_timings: load time = 6432.57 ms
llama_print_timings: sample time = 32.92 ms / 128 runs ( 0.26 ms per token, 3888.10 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 22756.97 ms / 128 runs ( 177.79 ms per token, 5.62 tokens per second)
llama_print_timings: total time = 22857.59 ms
The RX 560 may be slower in part because it's using the fallback code for __dp4a(): its ISA lacks a corresponding opcode, and the compiler may not be choosing the fastest instructions, or it might choose not to unroll a loop later on because it emitted too many instructions per dp4a.
Could you try this commit and see if the RX 560 is any faster? https://github.com/Engininja2/llama.cpp/commit/23510efd1f2c9975ade37904a261cc7df7dd008a
As for the RX 7900XTX, I can't think of anything. The PR for RDNA mul_mat_q tunings has someone reporting solid speeds for that gpu https://github.com/ggerganov/llama.cpp/pull/2910#issuecomment-1711240949 Maybe an environment variable like LLAMA_DEBUG could be set and slowing things down, but I think that would affect the other build just as much.
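For anyone wanting to try it, a minimal sketch of fetching and building that commit on top of an existing llama.cpp checkout (the remote name is arbitrary, and the build flag is the one already used in this thread):

# hypothetical workflow, assuming an existing llama.cpp clone and a working ROCm toolchain
git remote add engininja2 https://github.com/Engininja2/llama.cpp
git fetch engininja2
git checkout 23510efd1f2c9975ade37904a261cc7df7dd008a
make clean && make LLAMA_HIPBLAS=1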
Wow I think you just fixed it for my RX560 card @Engininja2 , thank you so much!
I will do some more testing and let you know how it goes.
@oliverhu if you didn't compile the binary yourself (or compiled the binary on a machine with a different card), try doing that. By default, you will only get support for the current card at compile time unless AMDGPU_TARGETS and/or GPU_TARGETS are set, which has bitten me since I have a random assortment of cards across several machines. I had a gfx1100 recently and inference was very fast (and much faster than a big recent Xeon doing CPU inference) when compiled with that in the supported ROCm architectures list.
That's why I keep the AMD GPU targets in my Makefile set to:
GPU_TARGETS ?= gfx803 gfx900 gfx906 gfx908 gfx90a gfx1030 gfx1100 $(shell $(ROCM_PATH)/llvm/bin/amdgpu-arch)
That way it builds the most common targets and also picks up the architecture of whatever card is in the current machine (via amdgpu-arch).
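For example, a build along these lines should cover a specific set of cards, assuming the Makefile honors a GPU_TARGETS override from the command line (the architecture list here is just an illustration):

# hedged example: build the HIP backend for an explicit set of architectures
make clean && make LLAMA_HIPBLAS=1 GPU_TARGETS="gfx1030 gfx1100"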
I am not sure, but it may be because of the Instruction Set Extensions (Intel® SSE4.1, Intel® SSE4.2, Intel® AVX2) in your processor; as the README suggests, it supports the vector extensions AVX2 and AVX512.
Thanks, it's impressive to see so many community responses btw! I did compile locally, so I assume it uses the right arch. It doesn't make sense for an RTX 4070 Ti to be 10x faster than a Radeon 7900 XTX anyway, regardless of the CPU discussion…
Yeah the 7900XTX runs pretty fast and even faster than the scores I reported back then. I did initially notice very low scores but that was because I was compiling with a debug build (which is the default). You gotta make sure to use -DCMAKE_BUILD_TYPE=Release when building.
I just saw the build command used here: make LLAMA_HIPBLAS=1. This doesn't use the -O3 build flag. You'll have to specify it manually in the Makefile. LLAMA_FAST also won't work because it doesn't add -O3 to HIPFLAGS: https://github.com/ggerganov/llama.cpp/blob/master/Makefile#L119
It's probably fixed by now. Tested on a 7950X and 7900XT, Ubuntu 23.04, kernel xanmod 6.5.5, rocm-5.7.0, on commit eee42c670e6fa6df9cf17e7ffc319f74cbd81354.
An error popped up when building HIP:
/opt/rocm-5.7.0/llvm/lib/clang/17.0.0/include/cuda_wrappers/cmath:27:15: fatal error: 'cmath' file not found
how to fix: sudo apt-get install libstdc++-13-dev
Test string: ./llama-bench --model ./models/amethyst-13b-mistral.Q4_K_M.gguf
Test results:

make clean && make

| model | size | params | backend | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | CPU | 16 | pp 512 | 39.74 ± 0.55 |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | CPU | 16 | tg 128 | 7.39 ± 0.03 |

make clean && make LLAMA_OPENBLAS=1

| model | size | params | backend | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | BLAS | 16 | pp 512 | 5.47 ± 0.04 |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | BLAS | 16 | tg 128 | 7.40 ± 0.03 |

make clean && make LLAMA_CLBLAST=1

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | OpenCL | 99 | pp 512 | 151.09 ± 9.03 |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | OpenCL | 99 | tg 128 | 30.96 ± 0.75 |

make clean && make LLAMA_HIPBLAS=1

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | ROCm | 99 | pp 512 | 687.51 ± 0.74 |
| llama 13B mostly Q4_K - Medium | 7.33 GiB | 13.02 B | ROCm | 99 | tg 128 | 54.94 ± 0.06 |
Not sure if this adds anything, but I noticed on my RX 570 that prompt ingestion was terribly slow, slower than CLBLAST or OPENBLAS, while actual inference would still be fast. Turns out it was mmq. With the -nommq option and its equivalent in the ROCm fork of KoboldCPP, prompt ingestion sped up dramatically to a completely usable state.
Without mmq: ./main -m ../openhermes-2-mistral-7b.Q5_K_M.gguf -f prompts/dan-modified.txt -n 100 -ngl 20 --threads 6 -nommq
llama_print_timings: load time = 1268.83 ms
llama_print_timings: sample time = 6.01 ms / 20 runs ( 0.30 ms per token, 3328.34 tokens per second)
llama_print_timings: prompt eval time = 4450.21 ms / 365 tokens ( 12.19 ms per token, 82.02 tokens per second)
llama_print_timings: eval time = 2483.49 ms / 19 runs ( 130.71 ms per token, 7.65 tokens per second)
With mmq: ./main -m ../openhermes-2-mistral-7b.Q5_K_M.gguf -f prompts/dan-modified.txt -n 100 -ngl 20 --threads 6
llama_print_timings: load time = 6360.01 ms
llama_print_timings: sample time = 3.29 ms / 11 runs ( 0.30 ms per token, 3345.50 tokens per second)
llama_print_timings: prompt eval time = 106569.27 ms / 365 tokens ( 291.97 ms per token, 3.43 tokens per second)
llama_print_timings: eval time = 1369.67 ms / 10 runs ( 136.97 ms per token, 7.30 tokens per second)
llama_print_timings: total time = 107948.71 ms
12ms vs 292ms per token to process a prompt.
Takeaway: on gfx803 cards, use -nommq and don't use the custom mul_mat_q kernels. It only occurred to me because on KoboldCPP-ROCm you have to explicitly specify mmq or use the launcher, which defaults to it; I couldn't understand why KoboldCPP was so much faster when run from the command line. Then I realized that with the launcher, where mmq is the default, the slowness was present just like with llama.cpp ./main.
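Putting that together, a rough sketch of the gfx803 workflow described above (the build target override is an assumption about the Makefile; the run command is the one from earlier in this thread):

# hedged sketch: build for gfx803, then disable the custom mul_mat_q kernels at run time
make clean && make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx803
./main -m ../openhermes-2-mistral-7b.Q5_K_M.gguf -f prompts/dan-modified.txt -n 100 -ngl 20 --threads 6 -nommq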
Thanks for all the replies. None worked for me.
Observations:
- -O3 is automatically applied. Adding another -O3 didn't work since it is always there (also validated in the commands).
- --nommq didn't work either.
- GPU_TARGETS ?= gfx1100 $(shell $(ROCM_PATH)/llvm/bin/amdgpu-arch) didn't work either.
- cmake is faster than make (5.6 vs 6.7 t/s), regardless of -DCMAKE_BUILD_TYPE=Release (see the sketch below).
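For reference, the cmake path mentioned in the last point could look roughly like this; whether LLAMA_HIPBLAS is the exact option name in your checkout is an assumption, so verify it against the CMakeLists:

# hedged sketch: CMake-based HIP build in Release mode
cmake -B build -DLLAMA_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j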
Probably hipBLAS not working? It's pretty easy to break on Linux. To restore it, you need to rerun the driver installation, but do not forget the flags that install the HIP libraries. That's exactly what happened to me: I didn't understand why Stable Diffusion was so slow until I reinstalled HIP and ROCm. Just make sure that ROCm and HIP work.
On my computer, using an RX 7900 XT I'm getting a decent 82 tokens/s, which is less than I get using mlc-llm (I usually get 90 to 105 tokens/second):
$ ./main -m openchat_3.5-16k.Q2_K.gguf -n 512 -ngl 84
...
llama_print_timings: load time = 602,64 ms
llama_print_timings: sample time = 49,50 ms / 512 runs ( 0,10 ms per token, 10343,02 tokens per second)
llama_print_timings: prompt eval time = 0,00 ms / 1 tokens ( 0,00 ms per token, inf tokens per second)
llama_print_timings: eval time = 6213,93 ms / 512 runs ( 12,14 ms per token, 82,40 tokens per second)
llama_print_timings: total time = 6353,88 ms
Log end
I'm using Ubuntu 22.04 with the Mesa GPU driver! The amdgpu driver had some issues, so I switched back to the Mesa one. If you have an RX 7900 XTX then you should set ngl to 96.
There is one problem: if I'm not setting the -ngl 84 param, it seems to default to 1 or a very low number and it's terribly slow... Is it possible to have better auto-detection?
Here is the output difference between no_gl param and ngl params:
diff -Nru no_gl.txt ngl.txt
--- no_gl.txt 2023-11-17 18:41:18.800750412 +0200
+++ ngl.txt 2023-11-17 18:42:16.152175500 +0200
@@ -1,7 +1,7 @@
Log start
main: build = 1523 (947f64f)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
-main: seed = 1700239230
+main: seed = 1700239309
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
@@ -356,19 +356,23 @@
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0,11 MiB
llm_load_tensors: using ROCm for GPU acceleration
-llm_load_tensors: mem required = 2939,69 MiB
-llm_load_tensors: offloading 0 repeating layers to GPU
-llm_load_tensors: offloaded 0/35 layers to GPU
-llm_load_tensors: VRAM used: 0,00 MiB
+llm_load_tensors: mem required = 41,12 MiB
+llm_load_tensors: offloading 32 repeating layers to GPU
+llm_load_tensors: offloading non-repeating layers to GPU
+llm_load_tensors: offloaded 35/35 layers to GPU
+llm_load_tensors: VRAM used: 2898,56 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000,0
llama_new_context_with_model: freq_scale = 1
+llama_kv_cache_init: offloading v cache to GPU
+llama_kv_cache_init: offloading k cache to GPU
+llama_kv_cache_init: VRAM kv self = 64,00 MiB
llama_new_context_with_model: kv self size = 64,00 MiB
llama_build_graph: non-view tensors processed: 740/740
llama_new_context_with_model: compute buffer total size = 74,57 MiB
llama_new_context_with_model: VRAM scratch buffer: 73,00 MiB
-llama_new_context_with_model: total VRAM used: 73,00 MiB (model: 0,00 MiB, context: 73,00 MiB)
+llama_new_context_with_model: total VRAM used: 3035,57 MiB (model: 2898,56 MiB, context: 137,00 MiB)
system_info: n_threads = 16 / 32 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
@@ -376,3 +380,4 @@
top_k = 40, tfs_z = 1,000, top_p = 0,950, min_p = 0,050, typical_p = 1,000, temp = 0,800
mirostat = 0, mirostat_lr = 0,100, mirostat_ent = 5,000
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0
Ah, let me try mesa gpu driver as well...seems to be driver issues :(
I have a similar issue with a CLBlast build on Windows, on my rx5700XT. Offloading layers to GPU causes a very significant slowdown, even compared to my slow CPU.
.\main.exe -m .\models\tinyllama\tinyllama-1.1b-chat-v0.3.Q2_K.gguf -c 4096 -p "This is a test prompt." -e -n 128 -ctk q4_0 -s 0 -t 4 -ngl 20
...
llama_print_timings: load time = 3765.29 ms
llama_print_timings: sample time = 36.10 ms / 128 runs ( 0.28 ms per token, 3545.90 tokens per second)
llama_print_timings: prompt eval time = 460.28 ms / 7 tokens ( 65.75 ms per token, 15.21 tokens per second)
llama_print_timings: eval time = 20259.19 ms / 127 runs ( 159.52 ms per token, 6.27 tokens per second)
llama_print_timings: total time = 20801.06 ms
.\main.exe -m .\models\tinyllama\tinyllama-1.1b-chat-v0.3.Q2_K.gguf -c 4096 -p "This is a test prompt." -e -n 128 -ctk q4_0 -s 0 -t 4 -ngl 0
...
llama_print_timings: load time = 298.46 ms
llama_print_timings: sample time = 42.78 ms / 128 runs ( 0.33 ms per token, 2991.77 tokens per second)
llama_print_timings: prompt eval time = 394.26 ms / 7 tokens ( 56.32 ms per token, 17.75 tokens per second)
llama_print_timings: eval time = 9202.12 ms / 127 runs ( 72.46 ms per token, 13.80 tokens per second)
llama_print_timings:
AMD in their new release of ROCm 6 will also do a lot of optimization around FP8 in hipBLASLt (and in general recommends that library for use in ML); could that maybe be used to optimize further? https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/product-briefs/amd-rocm-6-brief.pdf
hipBLASLt's GitHub says it requires an AMD MI200-MI300 Instinct Accelerator GPU.
Too bad. Although it's AMD, so maybe I'll just try and build it on my XTX and see if it works anyway, wouldn't be the first time...
What still could be interesting is WMMA, I don't know if you can already take advantage of it using hipblas, but if not, this would probably really accelerate stuff on the RDNA3 cards, as they come with that built deep into the architecture: https://gpuopen.com/learn/wmma_on_rdna3/ (also supported by CUDA, so maybe you can accelerate both in one go)
Maybe in general, I'm not too knowledgeable there, but might it be useful to use something higher level like MIOpen and let AMD do the optimization?
Definitely share your results, I also have a 7900xtx!
When it comes to compiling models for specific architectures, you can always look at mlc: https://llm.mlc.ai/docs/index.html, I believe they're using an Apache compiler/framework to compile a model into extremely optimized code. Not sure if it's using MIGraphX or MIOpen or ... under the hood.
Inference performance using mlc is basically the best you can get on consumer hardware as far as my knowledge goes. Better than llama.cpp and better than exllama. Not sure if it's fit for enterprise applications and servers, but most of us aren't doing that anyway.
Well I'm using TabbyML, which is currently bound to using llama.cpp, so that's why I'm stuck with that (which also means I work with different models than just llama 2). Maybe it's time to change that...
Nope, building hipBLASLt on the XTX doesn't work; it requires amdhsa_accum_offset, which is only available on CDNA2+ it seems.
@oliverhu I guess you have an Intel CPU computer :) I can reproduce similar issues on my PC with an Intel CPU, and I think it is caused by the Makefile-based hipBLAS build, which makes the GPU offload of model layers not really work.
But the cmake-based build creates the correct binary and works well.
Yeah, I have Intel CPU with AMD GPU, haven’t got a chance to dig deeper
@arch-btw I'm also using an RX 560 and the speed is the same as CPU. What speed did you have before and after the fix?
Just a general FYI, if you have a pretty modern CPU, that's probably even expected behavior, as it's a 7 year old tiny 14nm GPU. Plus I don't think ROCm supports it anymore, so it might not even be using the GPU. I think Vega is the oldest it does.
I was referring to an i7-7700HQ, so it's not really expected, but yes, I doubt the GPU is truly active, which is why I'm asking about your speed.
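One quick way to check whether ROCm sees the card at all before worrying about speed (the grep pattern is just a convenience):

# hedged sketch: verify the GPU shows up as a ROCm agent before benchmarking
rocminfo | grep -i gfx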
Testing with a Radeon Instinct MI25, it is actually quite slow.

./llama-bench --model ./models/amethyst-13b-mistral.Q4_K_M.gguf
ggml_opencl: selecting platform: 'rusticl'
ggml_opencl: selecting device: 'AMD Radeon Instinct MI25 (radeonsi, vega10, LLVM 15.0.7, DRM 3.54, 6.5.0-14-generic)'

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B Q4_K - Medium | 7.33 GiB | 13.02 B | OpenCL | 99 | pp 512 | 43.12 ± 0.46 |
| llama 13B Q4_K - Medium | 7.33 GiB | 13.02 B | OpenCL | 99 | tg 128 | 3.51 ± 0.67 |
build: cfc4d75d (2564)
Well, you tested with openCL, so that's kinda expected. You want to use hipBLAS.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Coming from the ollama repo, maybe ROC_ENABLE_PRE_VEGA=1 would fix it? https://github.com/ollama/ollama/issues/2453#issuecomment-2113329084
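If someone wants to try that, it would presumably just be set in the environment for the run, along these lines (the model path is borrowed from earlier in the thread; whether the variable helps here is untested):

# hedged example: enable pre-Vega GPU support in ROCm for this invocation
ROC_ENABLE_PRE_VEGA=1 ./main -m ./models/amethyst-13b-mistral.Q4_K_M.gguf -ngl 99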
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
GPU inference should be faster than CPU.
Current Behavior
I have a 13900K CPU & 7900XTX 24G hardware. I built llama.cpp using hipBLAS and it builds. However, I noticed that when I offload all layers to the GPU, it is noticeably slower than running on the CPU.
GPU
CPU
Environment and Context
CPU: i9-13900KF
OS: Linux pia 6.2.0-33-generic #33~22.04.1-Ubuntu
GPU: 7900XTX
Python: 3.10
g++: 11.4.0
Make: 4.3
Build command
rocminfo
Additional comparison between Nvidia RTX 4070 Ti and RX 7900 XTX
I further tested the RTX 4070 Ti... it is probably 10x faster than the RX 7900 XTX...
Nvidia GPU (RTX 4070 Ti)
4070 Ti: 56.23 tokens per second
7900XTX 5.62 tokens per second