ollama / ollama

Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
https://ollama.com
MIT License

Memory leaks after each prompt #7403

Open regularRandom opened 4 days ago

regularRandom commented 4 days ago

What is the issue?

It seems Ollama has a memory leak, or doesn't free memory after a prompt has been executed. I have the following in the logs:

[Mon Oct 28 13:03:00 2024] ------------[ cut here ]------------ [Mon Oct 28 13:03:00 2024] WARNING: CPU: 38 PID: 15739 at mm/page_alloc.c:4678 alloc_pages_noprof+0x2df/0x370 [Mon Oct 28 13:03:00 2024] Modules linked in: tls(E) xt_nat(E) veth(E) xt_conntrack(E) xt_MASQUERADE(E) bridge(E) stp(E) llc(E) nf_conntrack_netlink(E) xt_addrtype(E) ipt_REJECT(E) nft_compat(E) tun(E) nft_masq(E) overlay(E) nf_conntrack_netbios_ns(E) nf_conntrack_broadcast(E) nft_fib_inet(E) nft_fib_ipv4(E) nft_fib_ipv6(E) nft_fib(E) nft_reject_inet(E) nf_reject_ipv4(E) nf_reject_ipv6(E) nft_reject(E) sunrpc(E) nft_ct(E) nft_chain_nat(E) nf_nat(E) nf_conntrack(E) nf_defrag_ipv6(E) nf_defrag_ipv4(E) rfkill(E) ip_set(E) nf_tables(E) nfnetlink(E) qrtr(E) binfmt_misc(E) vfat(E) fat(E) nvidia_drm(POE) nvidia_modeset(POE) nvidia_uvm(POE) nvidia(POE) intel_rapl_msr(E) intel_rapl_common(E) intel_uncore_frequency(E) intel_uncore_frequency_common(E) sb_edac(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) kvm(E) raid456(E) snd_hda_codec_realtek(E) async_raid6_recov(E) async_memcpy(E) async_pq(E) async_xor(E) snd_hda_codec_generic(E) async_tx(E) xor(E) snd_hda_codec_hdmi(E) snd_hda_scodec_component(E) snd_hda_intel(E) [Mon Oct 28 13:03:00 2024] ucsi_ccg(E) iTCO_wdt(E) snd_intel_dspcfg(E) intel_pmc_bxt(E) snd_intel_sdw_acpi(E) iTCO_vendor_support(E) snd_hda_codec(E) snd_hda_core(E) snd_hwdep(E) snd_seq(E) snd_seq_device(E) snd_pcm(E) rapl(E) raid6_pq(E) mei_me(E) snd_timer(E) drm_kms_helper(E) intel_cstate(E) snd(E) i2c_i801(E) intel_uncore(E) mei(E) soundcore(E) pcspkr(E) i2c_mux(E) video(E) i2c_nvidia_gpu(E) i2c_smbus(E) i2c_ccgx_ucsi(E) lpc_ich(E) joydev(E) drm(E) dm_mod(E) xfs(E) libcrc32c(E) sd_mod(E) sg(E) hid_logitech_hidpp(E) crct10dif_pclmul(E) ahci(E) crc32_pclmul(E) libahci(E) crc32c_intel(E) mxm_wmi(E) ixgbe(E) polyval_clmulni(E) polyval_generic(E) libata(E) nvme(E) nvme_core(E) ghash_clmulni_intel(E) mdio(E) dca(E) wmi(E) hid_logitech_dj(E) i2c_dev(E) fuse(E) [Mon Oct 28 13:03:00 2024] CPU: 38 UID: 959 PID: 15739 Comm: ollama_llama_se Kdump: loaded Tainted: P OE 6.11.5-1.el9.elrepo.x86_64 #1 [Mon Oct 28 13:03:00 2024] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE [Mon Oct 28 13:03:00 2024] Hardware name: MSI MS-7885/X99A RAIDER (MS-7885), BIOS P.71 06/13/2019 [Mon Oct 28 13:03:00 2024] RIP: 0010:alloc_pages_noprof+0x2df/0x370 [Mon Oct 28 13:03:00 2024] Code: e8 76 4c fa ff e9 7b fe ff ff 83 fe 0a 0f 86 a5 fd ff ff 45 31 f6 80 3d b0 9d f7 01 00 0f 85 ca fe ff ff c6 05 a3 9d f7 01 01 <0f> 0b e9 bc fe ff ff 45 31 f6 e9 b4 fe ff ff f7 c2 00 00 80 00 75 [Mon Oct 28 13:03:00 2024] RSP: 0018:ffffb42e4f5fbad0 EFLAGS: 00010246 [Mon Oct 28 13:03:00 2024] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [Mon Oct 28 13:03:00 2024] RDX: 0000000000000000 RSI: 000000000000000e RDI: 0000000000000000 [Mon Oct 28 13:03:00 2024] RBP: 0000000000040cc0 R08: 0000000000000000 R09: 0000000000000009 [Mon Oct 28 13:03:00 2024] R10: ffffb42e4f5fbbf8 R11: 0000000000000023 R12: 000000000000000e [Mon Oct 28 13:03:00 2024] R13: 000000000000000e R14: 0000000000000000 R15: 0000000000000cc0 [Mon Oct 28 13:03:00 2024] FS: 00007f920a152000(0000) GS:ffff9f365ed00000(0000) knlGS:0000000000000000 [Mon Oct 28 13:03:00 2024] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [Mon Oct 28 13:03:00 2024] CR2: 00007f91df908ba0 CR3: 00000002c9c3c004 CR4: 00000000003706f0 [Mon Oct 28 13:03:00 2024] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [Mon Oct 28 13:03:00 2024] DR3: 
0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [Mon Oct 28 13:03:00 2024] Call Trace: [Mon Oct 28 13:03:00 2024] [Mon Oct 28 13:03:00 2024] ? warn+0x7f/0x120 [Mon Oct 28 13:03:00 2024] ? __alloc_pages_noprof+0x2df/0x370 [Mon Oct 28 13:03:00 2024] ? report_bug+0x1c3/0x1d0 [Mon Oct 28 13:03:00 2024] ? handle_bug+0x42/0x70 [Mon Oct 28 13:03:00 2024] ? exc_invalid_op+0x14/0x70 [Mon Oct 28 13:03:00 2024] ? asm_exc_invalid_op+0x16/0x20 [Mon Oct 28 13:03:00 2024] ? alloc_pages_noprof+0x2df/0x370 [Mon Oct 28 13:03:00 2024] _kmalloc_large_node+0x74/0x120 [Mon Oct 28 13:03:00 2024] ? check_and_migrate_movable_pages+0x2a/0xb0 [Mon Oct 28 13:03:00 2024] __kmalloc_large_node_noprof+0x17/0xa0 [Mon Oct 28 13:03:00 2024] ? check_and_migrate_movable_pages+0x2a/0xb0 [Mon Oct 28 13:03:00 2024] kmalloc_noprof+0x2a4/0x3b0 [Mon Oct 28 13:03:00 2024] ? check_and_migrate_movable_pages+0x2a/0xb0 [Mon Oct 28 13:03:00 2024] check_and_migrate_movable_pages+0x2a/0xb0 [Mon Oct 28 13:03:00 2024] gup_longterm_locked+0x5de/0x8e0 [Mon Oct 28 13:03:00 2024] ? vmalloc_node_range_noprof+0x10/0x220 [Mon Oct 28 13:03:00 2024] pin_user_pages+0x78/0xa0 [Mon Oct 28 13:03:00 2024] os_lock_user_pages+0xaa/0x1a0 [nvidia] [Mon Oct 28 13:03:00 2024] _nv000676rm+0x67/0x110 [nvidia] [Mon Oct 28 13:03:00 2024] ? _nv000745rm+0xbe6/0xe00 [nvidia] [Mon Oct 28 13:03:00 2024] ? rm_ioctl+0x7f/0x400 [nvidia] [Mon Oct 28 13:03:00 2024] ? nvidia_ioctl.isra.0+0x53e/0x7f0 [nvidia] [Mon Oct 28 13:03:00 2024] ? vm_mmap_pgoff+0x124/0x1b0 [Mon Oct 28 13:03:00 2024] ? nvidia_unlocked_ioctl+0x21/0x30 [nvidia] [Mon Oct 28 13:03:00 2024] ? __x64_sys_ioctl+0x8d/0xc0 [Mon Oct 28 13:03:00 2024] ? do_syscall_64+0x60/0x180 [Mon Oct 28 13:03:00 2024] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e [Mon Oct 28 13:03:00 2024] [Mon Oct 28 13:03:00 2024] ---[ end trace 0000000000000000 ]--- [Mon Oct 28 13:03:00 2024] Cannot map memory with base addr 0x7f86ae000000 and size of 0x43edf1 pages

Each prompt adds 20-30 GB of RAM usage, and after 5-6 executions Ollama fails with "Cannot map memory with base addr ".

I have a 22-core Xeon CPU, 128 GB of RAM, and a 2080 Ti GPU with 11 GB of VRAM, so more than enough for regular use. The OS is CentOS Stream 9 with a 6.11.5 kernel.

The behaviour is the same whether Ollama is built from source or downloaded from GitHub.
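
To make the growth easier to quantify, a loop like the following can log the runner's resident memory after every request. This is only a sketch: it assumes the default Ollama API on localhost:11434, and <model> is a placeholder for whichever model is being used.

for i in $(seq 1 6); do
    # send one non-streaming prompt through the API (placeholder model name)
    curl -s http://localhost:11434/api/generate \
         -d '{"model": "<model>", "prompt": "test question", "stream": false}' > /dev/null
    # resident memory of the runner process(es), in KiB
    ps -C ollama_llama_se -o pid,rss --sort=-rss | head -n 3
    # overall RAM picture after this prompt
    free -h | head -n 2
done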

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.14-16

rick-github commented 4 days ago
Each prompt adds 20-30GB of RAM

What sort of prompt are you sending? What client are you using?

dhiltgen commented 3 days ago

@regularRandom could you also try out the new 0.4.0 RC release to see if that changes the behavior at all? I'm not sure yet at which layer this memory issue is taking place, but prompt processing and caching have been reworked in the 0.4.0 release.

https://github.com/ollama/ollama/releases

regularRandom commented 3 days ago
Each prompt adds 20-30GB of RAM

What sort of prompt are you sending? What client are you using?

Well, homework questions like "How did the fear of communism come to affect American foreign policy in regard to "containment" and what were there critiques of containment?" - all well-formed prompts. The client is Open WebUI, running in Docker.

regularRandom commented 3 days ago

This process remains in memory after each execution:

135376 ollama 20 0 65.5g 17.5g 371640 S 0.0 14.0 31:26.44 ollama_llama_se

and new entry in dmesg:

[Mon Oct 28 21:16:03 2024] Cannot map memory with base addr 0x7f37c8000000 and size of 0x43edf1 pages
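
To see whether it's a single runner that keeps growing or stale runners piling up, something like this between prompts may help (a sketch; 135376 is the PID from the snippet above):

# all runner processes, with age and resident memory (KiB)
ps -eo pid,etime,rss,comm | grep ollama_llama_se
# resident memory of the specific runner shown above
grep VmRSS /proc/135376/status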

rick-github commented 3 days ago

What model are you using, and do you set any parameters like num_ctx or num_gpu?
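
For reference, these would normally be set either in the Modelfile or per request via the API options field; a sketch, with <model> as a placeholder:

# Modelfile
FROM <model>
PARAMETER num_ctx 4096
PARAMETER num_gpu 20

# or per request:
curl http://localhost:11434/api/generate \
     -d '{"model": "<model>", "prompt": "hello", "options": {"num_ctx": 4096, "num_gpu": 20}}'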

rick-github commented 3 days ago

Server logs may also help in debugging.
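
On a systemd-based Linux install they can usually be collected with something like the following (a sketch; the OLLAMA_DEBUG variable is assumed to be supported by your build for more verbose output):

# dump the service logs to a file
journalctl -u ollama --no-pager > ollama-server.log
# optionally restart the server with debug logging for the next run (assumption)
OLLAMA_DEBUG=1 ollama serve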

gestur1976 commented 3 days ago

I've been having the same issue since I installed Ollama about a month ago, but it seems to happen only with partially offloaded models, not when the model is fully loaded into VRAM.

My system: Arch Linux (kernel 6.11.4-arch2-1), AMD Ryzen 7 5700X 8-core CPU (16 threads), 64 GB DDR4 RAM, and 2 x NVIDIA RTX 3060 with 12 GB VRAM each (24 GB VRAM total).

After booting and before loading any model, my RAM usage is about 12 GB (there's a web server, PHP process managers, a MariaDB server, and some ZFS pools using a few GB of RAM as ARC cache).

As I said before, if I use models that fit entirely in my GPUs, I can make as many requests as I want without any apparent memory loss. If I watch nvidia-smi during requests and when the model unloads, VRAM is allocated and freed normally.

But if I load, for example, Meta-Llama-3.1-70B-Instruct.Q4_K_M (from https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF), imported into Ollama using the following Modelfile:

FROM ./MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
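
The import step itself isn't shown above; it would have been something along these lines (the model name here is only a placeholder, and the Modelfile path is assumed):

# create the model from the Modelfile in the current directory
ollama create llama3.1-70b-q4km -f Modelfile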

I ask, for example, "Tell me a random fun fact about the Roman Empire", and then I run ollama stop:

oct 29 13:35:48 morty ollama[43973]: time=2024-10-29T13:35:48.955+01:00 level=INFO source=server.go:105 msg="systemmemory" total="62.7 GiB" free="32.5 GiB" free_swap="57.6 GiB" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=36 layers.split=19,17 memory.available="[11.5 GiB 10.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="45.0 GiB" memory.required.partial="21.6 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[11.2 GiB 10.3 GiB]" memory.weights.total="38.9 GiB" memory.weights.repeating="38.1 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama346011471/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab --ctx-size 2048 --batch-size 512 --embedding --n-gpu-layers 36 --threads 8 --no-mmap --parallel 1 --tensor-split 19,17 --port 45727" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding" oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:48.956+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error" oct 29 13:35:49 morty ollama[87850]: INFO [main] starting c++ runner | tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[87850]: INFO [main] build info | build=10 commit="3a8c75e" tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[87850]: INFO [main] system info | n_threads=8 n_threads_batch=8 system_info="AVX = 1 |AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 |VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="131942650417152" timestamp=1730205348 total_threads=16 oct 29 13:35:49 morty ollama[87850]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="45727" tid="131942650417152" timestamp=1730205348 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: loaded meta data with 33 key-value pairs and 723 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab (version GGUF V3 (latest)) oct 29 13:35:49 morty ollama[43973]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 0: general.architecture str = llama oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 1: general.type str = model oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 2: general.name str = Models Meta Llama Meta Llama 3.1 70B ... 
oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 3: general.finetune str = Instruct oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 4: general.basename str = models-meta-llama-Meta-Llama-3.1 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 5: general.size_label str = 70B oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 6: general.license str = llama3.1 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 9: llama.block_count u32 = 80 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 10: llama.context_length u32 = 131072 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 11: llama.embedding_length u32 = 8192 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 13: llama.attention.head_count u32 = 64 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 17: general.file_type u32 = 15 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 21: tokenizer.ggml.pre str = smaug-bpe oct 29 13:35:49 morty ollama[43973]: time=2024-10-29T13:35:49.207+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model" oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 27: tokenizer.chat_template str = {% set loop_messages = messages %}{% ... oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 29: quantize.imatrix.file str = ./Meta-Llama-3.1-70B-Instruct-GGUF_im... 
oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 30: quantize.imatrix.dataset str = group_40.txt oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 31: quantize.imatrix.entries_count i32 = 560 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - kv 32: quantize.imatrix.chunks_count i32 = 68 oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type f32: 161 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q4_K: 441 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q5_K: 40 tensors oct 29 13:35:49 morty ollama[43973]: llama_model_loader: - type q6_K: 81 tensors oct 29 13:35:49 morty ollama[43973]: llm_load_vocab: special tokens cache size = 256 oct 29 13:35:49 morty ollama[43973]: llm_load_vocab: token to piece cache size = 0.7999 MB oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: format = GGUF V3 (latest) oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: arch = llama oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: vocab type = BPE oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_vocab = 128256 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_merges = 280147 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: vocab_only = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ctx_train = 131072 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd = 8192 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_layer = 80 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_head = 64 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_head_kv = 8 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_rot = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_swa = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_head_k = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_head_v = 128 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_gqa = 8 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_k_gqa = 1024 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_embd_v_gqa = 1024 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_norm_eps = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: f_logit_scale = 0.0e+00 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ff = 28672 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_expert = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_expert_used = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: causal attn = 1 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: pooling type = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope type = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope scaling = linear oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: freq_base_train = 500000.0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: freq_scale_train = 1 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: n_ctx_orig_yarn = 131072 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: rope_finetuned = unknown oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_d_conv = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: 
ssm_d_inner = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_d_state = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_dt_rank = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: ssm_dt_b_c_rms = 0 oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model type = 70B oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model ftype = Q4_K - Medium oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model params = 70.55 B oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: general.name = Models Meta Llama Meta Llama 3.1 70B Instruct oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: LF token = 128 'Ä' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>' oct 29 13:35:49 morty ollama[43973]: llm_load_print_meta: max token length = 256 oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no oct 29 13:35:49 morty ollama[43973]: ggml_cuda_init: found 2 CUDA devices: oct 29 13:35:49 morty ollama[43973]: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes oct 29 13:35:49 morty ollama[43973]: Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes oct 29 13:35:49 morty ollama[43973]: llm_load_tensors: ggml ctx size = 1.01 MiB oct 29 13:35:50 morty ollama[43973]: time=2024-10-29T13:35:50.661+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding" oct 29 13:35:57 morty ollama[43973]: time=2024-10-29T13:35:57.224+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model" oct 29 13:35:57 morty ollama[43973]: ggml_cuda_host_malloc: failed to allocate 22863.42 MiB of pinned memory: invalid argument oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: offloading 36 repeating layers to GPU oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: offloaded 36/81 layers to GPU oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CPU buffer size = 22863.42 MiB oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CUDA0 buffer size = 9094.06 MiB oct 29 13:35:57 morty ollama[43973]: llm_load_tensors: CUDA1 buffer size = 8585.62 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_ctx = 2048 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_batch = 512 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: n_ubatch = 512 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: flash_attn = 0 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: freq_base = 500000.0 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: freq_scale = 1 oct 29 13:40:43 morty ollama[43973]: llama_kv_cache_init: CUDA_Host KV buffer size = 352.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_kv_cache_init: CUDA0 KV buffer size = 152.00 MiB oct 29 13:40:43 
morty ollama[43973]: llama_kv_cache_init: CUDA1 KV buffer size = 136.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA_Host output buffer size = 0.52 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA0 compute buffer size = 1088.45 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA1 compute buffer size = 324.00 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: graph nodes = 2566 oct 29 13:40:43 morty ollama[43973]: llama_new_context_with_model: graph splits = 489 INFO [main] model loaded | tid="131942650417152" timestamp=1730205682 oct 29 13:41:22 morty ollama[43973]: time=2024-10-29T13:41:22.396+01:00 level=INFO source=server.go:626 msg="llama runner started in 333.38 seconds"

Then, while the output is being generated, my RAM usage rises to 61 GB of RAM plus 10 GB of swap, and when it finishes, the used RAM only goes down to 40 GB instead of returning to less than 20 GB. But the most intriguing part is that I can't see which process is using that RAM. For example, htop sorted by RAM usage:

[image: htop sorted by memory usage]

Output of ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
...
5.82031 MB    systemd-userwork:
5.86328 MB    systemd-userwork:
5.99609 MB    /usr/lib/flatpak-session-helper
6.10547 MB    xdg-dbus-proxy
6.23047 MB    /opt/google/chrome/chrome
63.9414 MB    /opt/google/chrome/chrome
64.4922 MB    /opt/google/chrome/chrome
6.82422 MB    /usr/bin/mariadbd
69.4648 MB    /usr/bin/konsole
7.06641 MB    /usr/lib/evolution-data-server/evolution-alarm-notify
7.08203 MB    /usr/lib/udisks2/udisksd
7.19531 MB    /opt/google/chrome/chrome
7.20703 MB    /opt/google/chrome/chrome
7.27344 MB    journalctl
7.52734 MB    awk
7.55859 MB    /opt/spotify/spotify
7.58203 MB    sort
7.69141 MB    /opt/google/chrome/chrome
7.72266 MB    /usr/lib/xdg-desktop-portal
7.85156 MB    /opt/google/chrome/chrome
7.94922 MB    /usr/lib/gvfs-udisks2-volume-monitor
8.05078 MB    ps
8.05859 MB    /usr/bin/fcitx5
0.820312 MB   /usr/bin/vmnet-netifup
8.43359 MB    /usr/bin/sddm
8.44141 MB    /opt/spotify/spotify
8.46484 MB    /opt/google/chrome/chrome
8.67578 MB    /usr/bin/redis-server
8.73047 MB    /opt/spotify/spotify
8.82422 MB    /usr/lib/systemd/systemd
0.886719 MB   /usr/bin/vmnet-netifup
88.9023 MB    /opt/google/chrome/chrome
8.99609 MB    /usr/bin/NetworkManager
9.09766 MB    /opt/google/chrome/chrome
9.12109 MB    /usr/bin/bash
0.953125 MB   server
9.55859 MB    /opt/google/chrome/chrome
9.56641 MB    htop
9.62109 MB    /opt/google/chrome/chrome
9.89453 MB    /opt/google/chrome/chrome
9.96484 MB    /opt/google/chrome/chrome
9.99609 MB    /opt/google/chrome/chrome

It's very weird, and it happens every time I load a partially offloaded model (Llama 3, Qwen 2.5, ...). After 2 or 3 consecutive loads and unloads the system becomes unresponsive, I get OOM kills everywhere, and I have to reboot, since I can't find any way to locate where the memory is being used and free it. Restarting the service doesn't help either, and as far as I can tell this has been happening for at least a month. I just tried with the latest version of Ollama.
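
When no process appears to own the memory, comparing the kernel-level counters before and after a load/unload cycle can at least show which bucket the RAM went to (page cache, shmem, slab, or locked/unevictable pages). A minimal sketch:

# snapshot the relevant /proc/meminfo counters; run once before loading and once after unloading
grep -E '^(MemFree|MemAvailable|Cached|Shmem|Slab|SUnreclaim|Unevictable|Mlocked|HugePages_Total):' /proc/meminfo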

When I come back from lunch, I will repeat the process from a console terminal, without the desktop environment active, to see whether other errors show up in the journal or kernel logs.

rick-github commented 3 days ago

What version of ollama were you running when you captured those logs? What's the output of free -h after ollama stop?

gestur1976 commented 3 days ago

OK, I repeated the test without the graphical interface; here's the requested info:

$ ollama --version
ollama version is 0.3.14

(before running ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  28Gi  9.4Gi   2.9Gi         27Gi      33Gi
Inter:     63Gi    0B   63Gi

(while running ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  58Gi  1,9Gi   166Mi        3,0Gi     4,0Gi
Inter:     63Gi  7,2Gi  56Gi

(after stopping ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  36Gi   24Gi   124Mi        2,9Gi      26Gi
Inter:     63Gi  5,5Gi  58Gi

... If I repeat the process, it is now much slower due to increased swap usage, and the outputs are:

(while running ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  62Gi  892Mi   404Mi        860Mi     671Mi
Inter:     63Gi   29Gi  34Gi

(after stopping ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  57Gi  5,7Gi   2,9Gi        3,5Gi     5,7Gi
Inter:     63Gi    0B   63Gi

... and now my system is completely unusable and I have to reboot.

Tell me if you need more info or tests.

Thanks!

captaincurrie commented 2 days ago

I am having the same issue

rick-github commented 2 days ago

@captaincurrie Can you provide some info? Server logs, which client, output of free -h before & after, other observations?

regularRandom commented 1 day ago

What model are you using, and do you set any parameters like num_ctx or num_gpu?

I use dolphin-mixtral 8x7b. Running ollama as a service without any extra parameters:

/opt/ollama/ollama serve

@gestur1976 I have the same behaviour and exactly the same observations: I need to reboot the machine because it becomes unusable, and Linux starts killing processes at random.

rick-github commented 1 day ago

@regularRandom Can you provide server logs?

I ran the following in a loop for 6 hours:

ollama run Meta-Llama-3.1:70B-Instruct.Q4_K_M 'Tell me a random fun fact about the Roman Empire'

with no system impact on two machines, RTX4070 (12G) + 96G RAM and RTX3080 (16G) + 64G RAM. Apart from the initial allocation of memory for the model, the server didn't grow any larger over the hours. However, this doesn't directly parallel the way @regularRandom is using the system, as I don't have Open WebUI configured on these machines. I will attempt to get that set up soon and re-run the attempt to replicate.
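
For reference, the loop was roughly of this shape (a hedged reconstruction; the RSS logging line is added here for illustration and wasn't part of the original run):

while true; do
    ollama run Meta-Llama-3.1:70B-Instruct.Q4_K_M 'Tell me a random fun fact about the Roman Empire'
    # append a timestamped total of runner resident memory (MiB) to a log file
    echo "$(date +%T) $(ps -C ollama_llama_se -o rss= | awk '{s+=$1} END {print s/1024 " MiB"}')" >> runner_rss.log
done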

An interesting point in @gestur1976's post is the last free output:

(after stopping ollama)
$ free -h
          total  used   free  shared buffer/cache available
Mem:       62Gi  57Gi  5,7Gi   2,9Gi        3,5Gi     5,7Gi
Inter:     63Gi    0B   63Gi

where there is apparently no swap in use. Is this accurate?
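
If it helps to confirm, running the following immediately after the stop would show the swap state unambiguously:

free -h
swapon --show
cat /proc/swaps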