Open regularRandom opened 4 days ago
Each prompt adds 20-30GB of RAM
What sort of prompt are you sending? What client are you using?
@regularRandom could you also try out the new 0.4.0 RC release to see if that changes behavior at all? I'm not sure yet which layer this memory issue is taking place at, but prompt processing and caching has been reworked in the 0.4.0 release.
Each prompt adds 20-30GB of RAM
What sort of prompt are you sending? What client are you using?
Well, homework questions like "How did the fear of communism come to affect American foreign policy in regard to "containment", and what were the critiques of containment?" - all well prepared. The client is open-webui, running in Docker.
This process remains in memory after each execution:
135376 ollama 20 0 65.5g 17.5g 371640 S 0.0 14.0 31:26.44 ollama_llama_se
and a new entry appears in dmesg:
[Mon Oct 28 21:16:03 2024] Cannot map memory with base addr 0x7f37c8000000 and size of 0x43edf1 pages
What model are you using, and do you set any parameters like num_ctx or num_gpu?
Server logs may also help in debugging.
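For reference, a minimal sketch of where such parameters usually get set, in case it helps narrow things down (the model name and values below are placeholders, not taken from this issue):

# per request, via the generate API's "options" field
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "hello",
  "options": { "num_ctx": 8192, "num_gpu": 36 }
}'

# or baked into a custom model via a Modelfile
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 8192
EOF
ollama create llama3.1-8k -f Modelfile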
I've been having the same issue since I installed ollama about a month ago, but it seems to only happen with partially offloaded models, not when the models fit fully in VRAM.
My system: Arch Linux (kernel 6.11.4-arch2-1), AMD Ryzen 7 5700X 8-core CPU (16 threads), 64 GB DDR4 RAM, 2 x NVIDIA RTX 3060 12 GB (24 GB VRAM total).
After booting, and before loading any model, my RAM usage is about 12 GB (there's a web server, PHP process managers, a MariaDB server, and some ZFS pools using a few GB of RAM as ARC cache).
For example, as I said before, if I use models that fit entirely in my GPUs I can make as many requests as I want without any apparent memory loss. VRAM is allocated and freed normally if I run nvidia-smi during requests and when the model unloads.
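For reference, watching GPU memory and host RAM at the same time can be done with something along these lines (a rough sketch, not the exact command I used; the interval is arbitrary):

# GPU memory plus host RAM/swap, refreshed every 2 seconds
watch -n 2 'nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader; free -h | sed -n "2,3p"'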
But if, for example, I load Meta-Llama-3.1-70B-Instruct.Q4_K_M (from https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF), imported into ollama using the following Modelfile: FROM ./MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
I ask it, for example, "Tell me a random fun fact about the Roman Empire", and then I run ollama stop.
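(For completeness, the full sequence is roughly the following, with a placeholder model tag since the local name doesn't matter; the journal output from the load follows below.)

# import the GGUF into ollama using the Modelfile above
ollama create llama3.1-70b-q4km -f Modelfile
# single prompt, then unload
ollama run llama3.1-70b-q4km "Tell me a random fun fact about the Roman Empire"
ollama stop llama3.1-70b-q4km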
oct 29 13:35:48 morty ollama[43973]: time=2024-10-29T13:35:48.955+01:00 level=INFO source=server.go:105 msg="systemmemory" total="62.7 GiB" free="32.5 GiB" free_swap="57.6 GiB" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=36 layers.split=19,17 memory.available="[11.5 GiB 10.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="45.0 GiB" memory.required.partial="21.6 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[11.2 GiB 10.3 GiB]" memory.weights.total="38.9 GiB" memory.weights.repeating="38.1 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama346011471/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab --ctx-size 2048 --batch-size 512 --embedding --n-gpu-layers 36 --threads 8 --no-mmap --parallel 1 --tensor-split 19,17 --port 45727" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error" oct 29 13:35:49 morty ollama[87850]: INFO [main] starting c++ runner | tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[87850]: INFO [main] build info | build=10 commit="3a8c75e" tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[87850]: INFO [main] system info | n_threads=8 n_threads_batch=8 system_info="AVX = 1 |AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 |VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="131942650417152" timestamp=1730205348 total_threads=16 oct 29 13:35:49 morty ollama[87850]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="45727" tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: loaded meta data with 33 key-value pairs and 723 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab (version GGUF V3 (latest)) oct 29 13:35:49 morty ollama[43973]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 0: general.architecture str = llama oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 1: general.type str = model oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 2: general.name str = Models Meta Llama Meta Llama 3.1 70B ... 
oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 3: general.finetune str = Instruct oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 4: general.basename str = models-meta-llama-Meta-Llama-3.1 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 5: general.size_label str = 70B oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 6: general.license str = llama3.1 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 9: llama.block_count u32 = 80 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 10: llama.context_length u32 = 131072 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 11: llama.embedding_length u32 = 8192 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 13: llama.attention.head_count u32 = 64 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 17: general.file_type u32 = 15 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 21: tokenizer.ggml.pre str = smaug-bpe oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:49.207+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model" oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 27: tokenizer.chat_template str = {% set loop_messages = messages %}{% ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 29: quantize.imatrix.file str = ./Meta-Llama-3.1-70B-Instruct-GGUF_im... 
oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 30: quantize.imatrix.dataset str = group_40.txt oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 31: quantize.imatrix.entries_count i32 = 560 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 32: quantize.imatrix.chunks_count i32 = 68 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type f32: 161 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q4_K: 441 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q5_K: 40 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q6_K: 81 tensors oct 29 13:35:49 morty ollama[43973]: llm_load_vocab: special tokens cache size = 256 oct 29 13:35:49 morty ollama[43973]: llm_load_vocab: token to piece cache size = 0.7999 MB oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: format = GGUF V3 (latest) oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: arch = llama oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: vocab type = BPE oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_vocab = 128256 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_merges = 280147 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: vocab_only = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ctx_train = 131072 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd = 8192 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_layer = 80 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_head = 64 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_head_kv = 8 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_rot = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_swa = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_head_k = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_head_v = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_gqa = 8 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_k_gqa = 1024 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_v_gqa = 1024 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_norm_eps = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_logit_scale = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ff = 28672 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_expert = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_expert_used = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: causal attn = 1 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: pooling type = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope type = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope scaling = linear oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: freq_base_train = 500000.0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: freq_scale_train = 1 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ctx_orig_yarn = 131072 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope_finetuned = unknown oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_d_conv = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: 
ssm_d_inner = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_d_state = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_dt_rank = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_dt_b_c_rms = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model type = 70B oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model ftype = Q4_K - Medium oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model params = 70.55 B oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: general.name = Models Meta Llama Meta Llama 3.1 70B Instruct oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: LF token = 128 'Ä' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: max token length = 256 oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: found 2 CUDA devices: oct 29 13:35:49 morty ollama[43973]: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes oct 29 13:35:49 morty ollama[43973]: Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes oct 29 13:35:49 morty ollama[43973]: llm_load_tensors: ggml ctx size = 1.01 MiB oct 29 13:35:50 morty ollama[43973]: time=2024-10-29T13:35:50.661+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding" oct 29 13:35:57 morty ollama[43973]: time=2024-10-29T13:35:57.224+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model" oct 29 13:35:57 morty ollama[43973]: ggml_cuda_host_malloc: failed to allocate 22863.42 MiB of pinned memory: invalid argument oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: offloading 36 repeating layers to GPU oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: offloaded 36/81 layers to GPU oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CPU buffer size = 22863.42 MiB oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CUDA0 buffer size = 9094.06 MiB oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CUDA1 buffer size = 8585.62 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_ctx = 2048 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_batch = 512 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_ubatch = 512 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: flash_attn = 0 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: freq_base = 500000.0 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: freq_scale = 1 oct 29 13:40:43 morty ollama[43973]: llama_kv_cache_init: CUDA_Host KV buffer size = 352.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_kv_cache_init: CUDA0 KV buffer size = 152.00 MiB oct 29 13:40:43 
morty ollama[43973]: llama_kv_cache_init: CUDA1 KV buffer size = 136.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA_Host output buffer size = 0.52 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA0 compute buffer size = 1088.45 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA1 compute buffer size = 324.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: graph nodes = 2566 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: graph splits = 489 INFO [main] model loaded | tid="131942650417152" timestamp=1730205682 oct 29 13:41:22 morty ollama[43973]: time=2024-10-29T13:41:22.396+01:00 level=INFO source=server.go:626 msg="llama runner started in 333.38 seconds"
Then, while the output is being generated, my RAM usage rises to 61 GB of RAM plus 10 GB of swap, and when generation finishes, the used RAM only goes down to 40 GB instead of returning to less than 20 GB. The most intriguing part is that I can't see which process is using that RAM. For example, 'htop' sorted by RAM usage:
Output of ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n:
...
5.82031 MB systemd-userwork:
5.86328 MB systemd-userwork:
5.99609 MB /usr/lib/flatpak-session-helper
6.10547 MB xdg-dbus-proxy
6.23047 MB /opt/google/chrome/chrome
63.9414 MB /opt/google/chrome/chrome
64.4922 MB /opt/google/chrome/chrome
6.82422 MB /usr/bin/mariadbd
69.4648 MB /usr/bin/konsole
7.06641 MB /usr/lib/evolution-data-server/evolution-alarm-notify
7.08203 MB /usr/lib/udisks2/udisksd
7.19531 MB /opt/google/chrome/chrome
7.20703 MB /opt/google/chrome/chrome
7.27344 MB journalctl
7.52734 MB awk
7.55859 MB /opt/spotify/spotify
7.58203 MB sort
7.69141 MB /opt/google/chrome/chrome
7.72266 MB /usr/lib/xdg-desktop-portal
7.85156 MB /opt/google/chrome/chrome
7.94922 MB /usr/lib/gvfs-udisks2-volume-monitor
8.05078 MB ps
8.05859 MB /usr/bin/fcitx5
0.820312 MB /usr/bin/vmnet-netifup
8.43359 MB /usr/bin/sddm
8.44141 MB /opt/spotify/spotify
8.46484 MB /opt/google/chrome/chrome
8.67578 MB /usr/bin/redis-server
8.73047 MB /opt/spotify/spotify
8.82422 MB /usr/lib/systemd/systemd
0.886719 MB /usr/bin/vmnet-netifup
88.9023 MB /opt/google/chrome/chrome
8.99609 MB /usr/bin/NetworkManager
9.09766 MB /opt/google/chrome/chrome
9.12109 MB /usr/bin/bash
0.953125 MB server
9.55859 MB /opt/google/chrome/chrome
9.56641 MB htop
9.62109 MB /opt/google/chrome/chrome
9.89453 MB /opt/google/chrome/chrome
9.96484 MB /opt/google/chrome/chrome
9.99609 MB /opt/google/chrome/chrome
It's very weird, and it happens every time I load a partially offloaded model: Llama-3, Qwen-2.5, ... After 2 or 3 consecutive loads and unloads the system becomes unresponsive, I get OOM kills everywhere, and I have to reboot, as I'm unable to find any way to locate where the memory has gone and free it. Restarting the ollama service doesn't help either, and as far as I know this has been happening for at least a month. I just tried with the latest version of Ollama.
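In case it helps anyone reproduce this, these are the checks I plan to run next to see where the unaccounted memory sits (just a sketch; slabtop and smem may need to be installed, and the ZFS path assumes OpenZFS's usual layout):

# memory that never shows up as process RSS usually lands in one of these
grep -E 'Huge|Slab|SUnreclaim|Shmem|Mlocked' /proc/meminfo
# current ZFS ARC size (ARC is not attributed to any process)
awk '$1 == "size" {printf "%.1f GiB ARC\n", $3/1073741824}' /proc/spl/kstat/zfs/arcstats
# biggest kernel slab caches (needs root)
slabtop -o -s c | head -n 20
# per-process totals including swap, if smem is available
smem -t -k -s swap | tail -n 20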
I will repeat the process from a console terminal without the desktop environment active, to see whether I spot other errors in the journal or kernel logs, when I come back from lunch.
What version of ollama were you running when you captured those logs? What's the output of free -h after ollama stop?
OK, I repeated the test without the graphical interface; here is the requested info:
$ ollama --version
ollama version is 0.3.14

(before running ollama)
$ free -h
        total   used    free    shared  buffer/cache  available
Mem:    62Gi    28Gi    9.4Gi   2.9Gi   27Gi          33Gi
Inter:  63Gi    0B      63Gi
(while running ollama)
$ free -h
        total   used    free    shared  buffer/cache  available
Mem:    62Gi    58Gi    1,9Gi   166Mi   3,0Gi         4,0Gi
Inter:  63Gi    7,2Gi   56Gi
(after stopping ollama)
$ free -h
        total   used    free    shared  buffer/cache  available
Mem:    62Gi    36Gi    24Gi    124Mi   2,9Gi         26Gi
Inter:  63Gi    5,5Gi   58Gi
... If I repeat the process now, it is much slower due to the increased swap usage, and the outputs are:
(while running ollama)
$ free -h
        total   used    free    shared  buffer/cache  available
Mem:    62Gi    62Gi    892Mi   404Mi   860Mi         671Mi
Inter:  63Gi    29Gi    34Gi
(after stopping ollama)
$ free -h
        total   used    free    shared  buffer/cache  available
Mem:    62Gi    57Gi    5,7Gi   2,9Gi   3,5Gi         5,7Gi
Inter:  63Gi    0B      63Gi
... and now my system is completely unusable and I have to reboot.
Tell me if you need more info or tests.
Thanks!
I am having the same issue
@captaincurrie Can you provide some info? Server logs, which client, output of free -h before & after, other observations?
What model are you using, and do you set any parameters like num_ctx or num_gpu?
I use dolphin-mixtral 8x7b. Running ollama as a service without any extra parameters:
/opt/ollama/ollama serve
@gestur1976 I have the same behaviour and exactly the same observations: I need to reboot the machine because it becomes unusable, and Linux starts killing processes at random.
@regularRandom Can you provide server logs?
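(In case it's useful, a rough way to grab them, depending on how ollama is being run; the service name and OLLAMA_DEBUG variable are the usual defaults, not confirmed for this setup:)

# if ollama runs as a systemd service
journalctl -u ollama --no-pager --since "1 hour ago" > ollama.log
# if ollama serve is started by hand, capture its output directly
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama.log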
I ran the following in a loop for 6 hours:
ollama run Meta-Llama-3.1:70B-Instruct.Q4_K_M 'Tell me a random fun fact about the Roman Empire'
with no system impact on two machines, RTX4070 (12G) + 96G RAM and RTX3080 (16G) + 64G RAM. Apart from the initial allocation of memory for the model, the server didn't grow any larger over the hours. However, this doesn't directly parallel the way @regularRandom is using the system, as I don't have Open WebUI configured on these machines. I will attempt to get that set up soon and re-run the attempt to replicate.
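Roughly the shape of the loop, for anyone who wants to replicate (sketch only; the free -h sampling between runs is illustrative, and the model tag should match your own import):

# repeat the same prompt and log host memory after each run
while true; do
  ollama run Meta-Llama-3.1:70B-Instruct.Q4_K_M 'Tell me a random fun fact about the Roman Empire' > /dev/null
  date; free -h | sed -n '2,3p'
done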
An interesting point in @gestur1976's post is the last free:
(after stopping ollama)
$ free -h
total used free shared buffer/cache available
Mem: 62Gi 57Gi 5,7Gi 2,9Gi 3,5Gi 5,7Gi
Inter: 63Gi 0B 63Gi
where there is apparently no swap in use. Is this accurate?
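A quick way to double-check next time, before rebooting (standard tools only, nothing ollama-specific):

swapon --show              # per-device swap usage
vmstat 1 5                 # si/so columns show ongoing swap-in/swap-out
grep -i swap /proc/meminfo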
What is the issue?
It seems Ollama has a memory leak, or doesn't release memory after a prompt finishes executing. I have the following in the logs:
Each prompt adds 20-30 GB of RAM usage, and after 5-6 executions Ollama fails with "Cannot map memory with base addr ".
I have a 22-core Xeon CPU, 128 GB RAM, and a 2080 Ti GPU with 11 GB VRAM, so plenty for regular use. The OS is CentOS 9 Stream with a 6.11.5 kernel.
The behaviour is the same whether Ollama is built from source or downloaded from GitHub.
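For anyone trying to reproduce, a sketch of how the per-prompt growth can be measured (the model name is a placeholder; the endpoint and payload are the standard ollama generate API):

# send the same prompt repeatedly and record the runner's RSS after each one
for i in $(seq 1 6); do
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "MODEL_NAME", "prompt": "Tell me a random fun fact about the Roman Empire", "stream": false}' \
    > /dev/null
  ps -eo rss,comm | awk -v run="$i" '/ollama_llama/ {printf "run %s: %.1f GiB RSS (%s)\n", run, $1/1048576, $2}'
done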
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
0.3.14-16