ollama / ollama

Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
https://ollama.com
MIT License

Ollama is not using 100% of RTX 4000 VRAM (18 of 20GB) #3078

Open nfsecurity opened 7 months ago

nfsecurity commented 7 months ago

Hi, thank you for the wonderful ollama project and the amazing community!

[Screenshot: nvidia-smi output, 2024-03-12 8:32 AM]

I am testing the Mixtral 3-bit quantized model on an RTX 4000 with 20GB of VRAM. The model is 20GB in size, and as you can see in the nvidia-smi screenshot, Ollama is using only 18GB; the rest of the model was loaded into system RAM.

Is this normal, or is it an issue? Can I force Ollama to use 100% of the VRAM? Thank you!

orlyandico commented 7 months ago

Adding my report here; it seems to be a similar issue.

I'm getting less than 1 token per second with 2x P40 running the Smaug-72B-v0.1-q4_k_m.gguf quantised model from HuggingFace (4.84 bpw). CPU is at 400%, the GPUs hover at 20-40% utilisation, and the log says only 65 of 81 layers are offloaded to the GPU. The model is 40GB in size; 16GB on each GPU is used for the model and 2GB for the KV cache, a total of 18GB of VRAM per GPU as verified by nvidia-smi. That's 36GB in total, but I have 48GB available. I figure that if all 81 layers were on the GPUs, the model would use about 20GB of VRAM per GPU, leaving 4GB per GPU for the KV cache.
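
For reference, the back-of-envelope arithmetic behind that estimate, using the sizes reported in the logs below (a rough sketch of my own, not anything Ollama itself computes):

# Extrapolating full-offload VRAM from the partial-offload numbers in the logs below.
total_layers     = 81   # 80 repeating layers + 1 non-repeating
offloaded_layers = 65   # llm_load_tensors: offloaded 65/81 layers

vram_for_model_gib = (16176.19 + 16170.00) / 1024            # CUDA0 + CUDA1 buffers ~= 31.6 GiB
gib_per_layer      = vram_for_model_gib / offloaded_layers   # ~0.49 GiB per layer
full_offload_gib   = gib_per_layer * total_layers            # ~39.4 GiB, i.e. ~19.7 GiB per GPU

print(f"~{gib_per_layer:.2f} GiB/layer -> ~{full_offload_gib:.1f} GiB for all {total_layers} layers")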

Logs:

Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: format           = GGUF V3 (latest)
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: arch             = llama
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: vocab type       = BPE
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_vocab          = 152064
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_merges         = 151387
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_ctx_train      = 32768
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_embd           = 8192
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_head           = 64
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_head_kv        = 64
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_layer          = 80
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_rot            = 128
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_embd_head_k    = 128
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_embd_head_v    = 128
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_gqa            = 1
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_embd_k_gqa     = 8192
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_embd_v_gqa     = 8192
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_ff             = 24576
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_expert         = 0
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_expert_used    = 0
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: pooling type     = 0
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: rope type        = 0
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: rope scaling     = linear
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: freq_base_train  = 1000000.0
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: freq_scale_train = 1
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: rope_finetuned   = unknown
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: model type       = 65B
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: model ftype      = Q4_K - Medium
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: model params     = 72.29 B
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: model size       = 40.76 GiB (4.84 BPW)
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: general.name     = snapshots
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: EOS token        = 151643 '<|endoftext|>'
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: UNK token        = 151643 '<|endoftext|>'
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Mar 12 17:18:09 ThinkStation-S30 ollama[16976]: llm_load_tensors: ggml ctx size =    1.19 MiB
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors: offloading 65 repeating layers to GPU
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors: offloaded 65/81 layers to GPU
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:        CPU buffer size = 41737.81 MiB
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:      CUDA0 buffer size = 16176.19 MiB
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:      CUDA1 buffer size = 16170.00 MiB

nvidia-smi output:

Every 1.0s: nvidia-smi                         ThinkStation-S30: Tue Mar 12 17:19:48 2024

Tue Mar 12 17:19:48 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla P40                      On  | 00000000:02:00.0 Off |                  Off |
| N/A   63C    P0              54W / 170W |  18900MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  Tesla P40                      On  | 00000000:06:00.0 Off |                  Off |
| N/A   63C    P0              52W / 170W |  18828MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A     16976      C   /usr/local/bin/ollama                     18896MiB |
|    1   N/A  N/A     16976      C   /usr/local/bin/ollama                     18824MiB |
+---------------------------------------------------------------------------------------+
orlyandico commented 7 months ago

The more aggressively quantised Smaug-72B-v0.1-q2_k.gguf loads entirely on the GPUs (100% GPU, no CPU offload) and runs at 40-60% GPU compute utilisation on each of the two P40s.

It seems that the rounding logic for determining how many layers to put on the GPU is excessively conservative, but I can't find it in the source code...
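
For what it's worth, I'd expect the estimator to have roughly this shape (a guess at the general logic, not Ollama's actual code; the reserved margin is the knob that would make it conservative):

# Hypothetical sketch of a per-GPU layer-offload estimator -- NOT Ollama's real code.
def estimate_gpu_layers(free_vram_mib: float, model_size_mib: float, n_layers: int,
                        kv_cache_mib: float, reserve_mib: float) -> int:
    """How many layers fit on one GPU after reserving KV cache and a safety margin."""
    per_layer_mib = model_size_mib / n_layers
    usable_mib = free_vram_mib - kv_cache_mib - reserve_mib
    return max(0, min(n_layers, int(usable_mib // per_layer_mib)))  # floor, hence conservative

# One 24 GiB P40, the ~41 GiB Smaug-72B q4_k_m split over 2 GPUs, ~2.5 GiB KV cache per GPU:
for reserve_mib in (1024, 5120):
    per_gpu = estimate_gpu_layers(24576, 41738, 81, 2560, reserve_mib)
    print(f"reserve {reserve_mib} MiB/GPU -> {per_gpu} layers per GPU ({2 * per_gpu}/81 total)")
# A ~1 GiB reserve lands near a full offload (80/81); getting down to the observed
# 65/81 takes something closer to a ~5 GiB reserve per GPU.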

Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: format           = GGUF V3 (latest)
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: arch             = llama
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: vocab type       = BPE
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_vocab          = 152064
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_merges         = 151387
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_ctx_train      = 32768
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_embd           = 8192
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_head           = 64
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_head_kv        = 64
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_layer          = 80
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_rot            = 128
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_embd_head_k    = 128
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_embd_head_v    = 128
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_gqa            = 1
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_embd_k_gqa     = 8192
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_embd_v_gqa     = 8192
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_ff             = 24576
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_expert         = 0
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_expert_used    = 0
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: pooling type     = 0
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: rope type        = 0
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: rope scaling     = linear
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: freq_base_train  = 1000000.0
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: freq_scale_train = 1
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: rope_finetuned   = unknown
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: model type       = 65B
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: model ftype      = Q2_K - Medium
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: model params     = 72.29 B
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: model size       = 28.26 GiB (3.36 BPW)
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: general.name     = snapshots
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: EOS token        = 151643 '<|endoftext|>'
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: UNK token        = 151643 '<|endoftext|>'
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Mar 12 20:37:13 ThinkStation-S30 ollama[43824]: llm_load_tensors: ggml ctx size =    1.19 MiB
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloading 80 repeating layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloading non-repeating layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloaded 81/81 layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:        CPU buffer size =   389.81 MiB
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:      CUDA0 buffer size = 14132.19 MiB
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:      CUDA1 buffer size = 14417.38 MiB
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: ...................................................................>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model: n_ctx      = 2048
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model: freq_base  = 1000000.0
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model: freq_scale = 1
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   yes
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: ggml_init_cublas: found 2 CUDA devices:
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]:   Device 0: Tesla P40, compute capability 6.1, VMM: yes
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]:   Device 1: Tesla P40, compute capability 6.1, VMM: yes
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_kv_cache_init:      CUDA0 KV buffer size =  2624.00 MiB
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_kv_cache_init:      CUDA1 KV buffer size =  2496.00 MiB
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model: KV self size  = 5120.00 MiB, K (f16):>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model:  CUDA_Host input buffer size   =    2>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model:      CUDA0 compute buffer size =   32>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model:      CUDA1 compute buffer size =   32>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model:  CUDA_Host compute buffer size =    1>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: llama_new_context_with_model: graph splits (measure): 3
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: loading library /tmp/ollama506445724/cuda_v11/libext_server.so
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: {"function":"initialize","level":"INFO","line":433,"msg":"initializ>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: {"function":"initialize","level":"INFO","line":442,"msg":"new slot">
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: time=2024-03-12T20:39:38.488Z level=INFO source=dyn_ext_server.go:1>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: {"function":"update_slots","level":"INFO","line":1565,"msg":"all sl>
Mar 12 20:39:38 ThinkStation-S30 ollama[43824]: [GIN] 2024/03/12 - 20:39:38 | 200 |         2m27s |       127.0.0.1>
Mar 12 20:39:58 ThinkStation-S30 ollama[43824]: {"function":"launch_slot_with_data","level":"INFO","line":823,"msg">
Mar 12 20:39:58 ThinkStation-S30 ollama[43824]: {"function":"update_slots","level":"INFO","line":1796,"msg":"slot p>
Mar 12 20:39:58 ThinkStation-S30 ollama[43824]: {"function":"update_slots","level":"INFO","line":1821,"msg":"kv cac>
Mar 12 20:42:31 ThinkStation-S30 ollama[43824]: {"function":"print_timings","level":"INFO","line":257,"msg":"prompt>
Mar 12 20:42:31 ThinkStation-S30 ollama[43824]: {"function":"print_timings","level":"INFO","line":271,"msg":"genera>
Mar 12 20:42:31 ThinkStation-S30 ollama[43824]: {"function":"print_timings","level":"INFO","line":281,"msg":"      >
Mar 12 20:42:31 ThinkStation-S30 ollama[43824]: {"function":"update_slots","level":"INFO","line":1627,"msg":"slot r>
Mar 12 20:42:31 ThinkStation-S30 ollama[43824]: [GIN] 2024/03/12 - 20:42:31 | 200 |         2m33s |       127.0.0.1>
AdaptiveStep commented 7 months ago

I wish it could say "Using CPU only", "Using GPU only", "Slow inference detected", or something along those lines. We shouldn't have to discover these things by accident.

orlyandico commented 7 months ago

This part tells us the model is running entirely (or almost entirely) on the GPU:

Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloading 80 repeating layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloading non-repeating layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors: offloaded 81/81 layers to GPU
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:        CPU buffer size =   389.81 MiB
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:      CUDA0 buffer size = 14132.19 MiB
Mar 12 20:39:31 ThinkStation-S30 ollama[43824]: llm_load_tensors:      CUDA1 buffer size = 14417.38 MiB

And this disaster tells us that at least some of the layers are being evaluated on the CPU. Interestingly, even though 32GB of the 40GB model is loaded onto the GPUs, the entire model is also loaded into host RAM... A quick way to check for partial offload in the logs is sketched after the excerpt below.

Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors: offloading 65 repeating layers to GPU
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors: offloaded 65/81 layers to GPU
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:        CPU buffer size = 41737.81 MiB
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:      CUDA0 buffer size = 16176.19 MiB
Mar 12 17:18:11 ThinkStation-S30 ollama[16976]: llm_load_tensors:      CUDA1 buffer size = 16170.00 MiB
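
If you'd rather not eyeball the journal every time, the "offloaded X/Y layers" line is easy to check for programmatically. A small sketch, assuming the systemd unit is named "ollama"; adjust the log source for Docker/macOS setups:

import re
import subprocess

# Grab recent Ollama server logs from the systemd journal.
log = subprocess.run(
    ["journalctl", "-u", "ollama", "--no-pager", "-n", "2000"],
    capture_output=True, text=True,
).stdout

# llm_load_tensors logs "offloaded X/Y layers to GPU" each time a model is loaded.
for offloaded, total in re.findall(r"offloaded (\d+)/(\d+) layers to GPU", log):
    status = "full" if offloaded == total else "PARTIAL (some layers on CPU)"
    print(f"{offloaded}/{total} layers on GPU -> {status} offload")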
nfsecurity commented 7 months ago

So, is this a bug in the VRAM allocation, or is it the expected behavior?

orlyandico commented 7 months ago

I was able to fix this by RTFM.

You can set the num_gpu parameter to force a certain number of layers onto the GPU(s).

For Smaug-72B 4-bit quantised:

orly@ThinkStation-S30:~$ ollama run smaug-72b:latest
>>> /set parameter num_gpu 81
Set parameter 'num_gpu' to '81'

If you have the GGUF file, you can also add the number of GPU layers to the Modelfile before importing it into Ollama, e.g.

FROM ./Smaug-72B-v0.1-q4_k_m.gguf 
PARAMETER num_gpu 81
TEMPLATE "[INST] {{ .Prompt }} [/INST]"

(I knew from the Ollama logs that this particular GGUF had 81 layers)

What I noticed is that with Smaug-72B 4-bit quantised (4.84 bpw) and all 81 layers on the GPUs, each of my P40s has less than 1GB of free VRAM when inferencing. So things (barely) fit. I am guessing the default Ollama estimation algorithm is overly conservative because you will get a crash if you OOM the GPU(s). You can try setting the parameter manually on your setup/model and see if it works.
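
If you'd rather not bake the value into a Modelfile, the same option can also be passed per request through the REST API. A sketch; the model name and layer count are just this thread's example values, and I'm assuming the options field maps to Modelfile parameters the way the API docs describe:

import requests

# Per-request num_gpu override via the Ollama REST API.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "smaug-72b:latest",        # local model name used earlier in this thread
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_gpu": 81},         # force all 81 layers onto the GPUs
    },
    timeout=600,
)
print(resp.json()["response"])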

nfsecurity commented 7 months ago

Thank you @orlyandico, I was able to increase the number of layers loaded into GPU VRAM by 2 with Mixtral 3-bit quantised (from 29/33 to 31/33) and gained some tokens-per-second performance. The ollama process is now using more memory:

[Screenshot: 2024-03-18 7:25 PM]

Your advice about OOM is very important; we have to monitor memory usage carefully.

orlyandico commented 7 months ago

You won't get the full benefit of the GPU unless all the layers are on the GPU. You might be better off using a slightly more quantised model, e.g. 3 bpw instead of 4 bpw, so everything can fit on the GPU. But since you're already using a 3 bpw model... further quantisation is probably not a great idea.

strikeoncmputrz commented 6 months ago

@orlyandico did you just guess at the number of layers to offload after seeing 81 total in the Ollama logs, or is there some algorithm one can apply to estimate it? I've stuck with exl2 and smaller models because I don't want to futz around with guessing num_layers / num_gpu and cranking my CPUs up to 100% repeatedly while testing, but Mixtral 8x22B has me reconsidering offloading some layers to CPU :-)

orlyandico commented 6 months ago

The logs display the number of layers.

It was just a guess on my part that all the layers would fit.

Incidentally, I tried to do the same thing on a rented A6000 Ada. Smaug-72B doesn't fit: the 48GB A6000 Ada has only 45GB of memory reported by nvidia-smi, I guess because it has a video output and some VRAM is needed for the framebuffer.


orlyandico commented 6 months ago

The logs showed the number of layers loaded on the GPUs, and nvidia-smi displayed the VRAM consumption. The logs also displayed how much memory was used for KV cache.

So, take the VRAM used, subtract the KV cache, and what's left is what the model took. By looking at the number of layers loaded and the VRAM used, I extrapolated that all 81 would still fit.

On the A6000 Ada, all the layers actually fit in the 45GB of VRAM... but there's nothing left for the KV cache! So, ironically, Smaug-72B and Qwen-72B at 4 bpw fit on 2x 24GB P40s, but do NOT fit on a single 48GB A6000 Ada...
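
Spelled out with the numbers from the logs above (rough figures, ignoring compute buffers):

# Why Smaug-72B q4_k_m fits across 2x P40 but not on a single 48 GiB A6000 Ada.
model_gib    = 40.76   # llm_load_print_meta: model size
kv_cache_gib = 5.0     # llama_kv_cache_init: KV self size at n_ctx = 2048

for name, capacity_gib in [("2x P40 (headless)", 2 * 24), ("A6000 Ada (45 GiB usable)", 45)]:
    headroom_gib = capacity_gib - model_gib - kv_cache_gib
    print(f"{name}: {headroom_gib:+.1f} GiB left after model + KV cache")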


JKratto commented 1 month ago

Hi there,

I've also noticed some changes in performance after upgrading Ollama a while back (it's been 6-10 weeks), and I'd like to share my experience. I hope it can help diagnose and solve the issue.

My system: 5x 8GB Quadro RTX 4000

Previously, with Mixtral 8x7B q4 on all the GPUs (no layers offloaded to CPU), I achieved approximately 23 t/s. However, a day after upgrading Ollama, I observed that the same configuration no longer loaded Mixtral 8x7B q4 onto the GPUs only. Instead, 3 out of 33 layers were offloaded to the CPU, causing a significant performance drop to about 10 t/s. If I manually run /set parameter num_gpu 33, the model loads into VRAM only again and performance returns to 23 t/s. This behavior is consistent even when using only 4 GPUs; the model still fits with a manually set num_gpu value of 33 and there is no performance drop. As an interesting data point, I observe GPU compute utilization rates of around 15-20% (5-GPU inference) and 20-25% (4-GPU inference).

Based on my calculations, Mixtral should fit into about 23 GB of VRAM; however, considering the memory required for the cache, etc., and the VRAM utilization rates (peaking around 74% on the most utilized GPU with 5 GPUs, and about 94% peak / roughly 84% average with 4 GPUs), I believe 4 GPUs represent the minimum viable configuration. I recently set up a 6-GPU system, where Ollama loads all layers into VRAM by default; almost 50% of the VRAM is free, which is significantly inefficient.

As a side note, I am using Ollama with Open WebUI, and this setup makes loading the default model with 33/33 layers offloaded to GPU challenging (the num_gpu option was added recently, but it is per-chat only, and while I can change the setting myself, I can't really ask this of other users who are certainly not power users). It appears that Ollama is too cautious when loading models into VRAM. Is there a way around this, or should Ollama be more eager to load models into VRAM?

orlyandico commented 1 month ago

I recently tried the (bundled) Llama3-70B from the Ollama repo and it fits without issue in 48GB of VRAM, with no offloading to CPU. I do think that Ollama is too cautious/conservative for stability reasons, and that the estimation algorithm may (or does) change between versions. I am guessing that setting num_gpu manually is the only foolproof mechanism (you can create a new model with the setting embedded in the Modelfile). This comes with the risk of crashing Ollama if something else changes (e.g. the KV cache algorithm wants more memory).
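
Following up on the Open WebUI point above: one way to make the override stick for every user is to register a derived model whose Modelfile carries the parameter, so it doesn't depend on per-chat settings. A small sketch that drives ollama create from Python; the base model tag, new model name, and layer count are placeholders for whatever you actually run:

import os
import subprocess
import tempfile

# Register a derived model with num_gpu baked into its Modelfile, so front-ends like
# Open WebUI get a full offload by default. Base tag and layer count are placeholders.
modelfile = (
    "FROM mixtral:8x7b-instruct-v0.1-q4_0\n"   # replace with the base model you actually use
    "PARAMETER num_gpu 33\n"                   # all 33 layers onto the GPUs
)

with tempfile.NamedTemporaryFile("w", suffix=".Modelfile", delete=False) as f:
    f.write(modelfile)
    path = f.name

subprocess.run(["ollama", "create", "mixtral-fullgpu", "-f", path], check=True)
os.unlink(path)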