YUHSINCHENG1230 opened this issue 1 week ago
Server logs will help in debugging. The likely problem is that the model doesn't fit in VRAM and is spilling into system RAM, which falls back to CPU inference.
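A quick way to confirm this kind of spill (a sketch, not from the original reply; the exact output format may vary by version) is to check how the loaded model is split between CPU and GPU while a request is running:

```shell
# Show loaded models and their CPU/GPU split.
# A PROCESSOR value like "77%/23% CPU/GPU" means most layers live in system RAM.
ollama ps

# Cross-check actual VRAM usage and GPU utilization during generation.
nvidia-smi --query-gpu=index,memory.used,memory.total,utilization.gpu --format=csv
```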
0.4.1 was broken on CUDA multi-GPU when passing an image (#7588); this has been fixed and will be in 0.4.2.
Since you didn't mention crashing, I'm guessing you didn't pass an image. If you aren't using vision, try the non-multimodal https://ollama.com/library/llama3.2 models.
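For example (assuming the default tag on ollama.com, which is the small text-only model; adjust the tag to whatever fits your VRAM):

```shell
# Pull and run the text-only Llama 3.2 model (no vision projector to load).
ollama pull llama3.2
ollama run llama3.2 "Hello, can you hear me?"
```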
@rick-github I'm accessing the container on the V100 node through a session created by K8s (I have multiple GPUs). I can't get the server logs: running journalctl only outputs "No journal files were found. -- No entries --". What should I do? What confuses me is why the model overflows to the CPU when the GPUs are not fully occupied.
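(For reference, a hedged sketch not taken from this thread: inside a container there is usually no systemd/journald, so journalctl finds nothing; the server output has to be captured directly, or read from the pod.)

```shell
# Capture the server log to a file inside the container (OLLAMA_DEBUG adds detail).
OLLAMA_DEBUG=1 ollama serve > /tmp/ollama.log 2>&1 &
tail -f /tmp/ollama.log

# Or, if ollama is the pod's main process, read its logs from outside the cluster node.
kubectl logs <pod-name> -f
```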
@dhiltgen I am using vision: I pass in an image together with text. In [](url) the problem is with JSON format output, but I'm not using that, and generation is still usually very slow. It's the same even with only a few tokens.
@rick-github This is all I get from the main command:

(3.10) root@clearml-id-980eb02dfc484457b03275b81c6d1df0:~# ollama serve
2024/11/15 00:36:00 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:10080 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/shared/iris_t11216158/CECICSIC/model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost: https://localhost: http://127.0.0.1 https://127.0.0.1 http://127.0.0.1: https://127.0.0.1: http://0.0.0.0 https://0.0.0.0 http://0.0.0.0: https://0.0.0.0: app:// file:// tauri:// vscode-webview://] OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-15T00:36:00.212Z level=INFO source=images.go:755 msg="total blobs: 15"
time=2024-11-15T00:36:00.213Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-15T00:36:00.214Z level=INFO source=routes.go:1240 msg="Listening on [::]:10080 (version 0.4.1)"
time=2024-11-15T00:36:00.216Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-11-15T00:36:00.216Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-15T00:36:01.161Z level=INFO source=types.go:123 msg="inference compute" id=GPU-9b46d0e7-9d9f-b35b-5fd2-ad00b3823081 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-PCIE-16GB" total="15.8 GiB" available="15.5 GiB"
time=2024-11-15T00:36:01.161Z level=INFO source=types.go:123 msg="inference compute" id=GPU-a3352cd7-6a6f-e366-0826-56e0bf9c4658 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-PCIE-16GB" total="15.8 GiB" available="15.5 GiB"
time=2024-11-15T00:36:01.161Z level=INFO source=types.go:123 msg="inference compute" id=GPU-3262f129-847e-aa86-5c31-b1c2569daaa0 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-PCIE-16GB" total="15.8 GiB" available="15.5 GiB"
time=2024-11-15T00:36:01.161Z level=INFO source=types.go:123 msg="inference compute" id=GPU-a90f42e3-0dba-6636-ef3c-ae6d76d21d61 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-PCIE-16GB" total="15.8 GiB" available="15.5 GiB"

journalctl -u ollama --no-pager
time=2024-11-15T03:12:59.801Z level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2024-11-15T03:13:00.665Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/mnt/shared/iris_t11216158/CECICSIC/model/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 library=cuda parallel=1 required="14.9 GiB"
time=2024-11-15T03:13:01.307Z level=INFO source=server.go:105 msg="system memory" total="503.3 GiB" free="494.2 GiB" free_swap="0 B"
time=2024-11-15T03:13:01.312Z level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.9 GiB" projector.graph="2.8 GiB" layers.requested=4 layers.model=101 layers.offload=4 layers.split=1,1,1,1 memory.available="[15.5 GiB 15.5 GiB 15.5 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="65.1 GiB" memory.required.partial="14.9 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[7.2 GiB 2.6 GiB 2.6 GiB 2.6 GiB]" memory.weights.total="49.3 GiB" memory.weights.repeating="48.5 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-11-15T03:13:01.313Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /mnt/shared/iris_t11216158/CECICSIC/model/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 --ctx-size 2048 --batch-size 512 --n-gpu-layers 4 --mmproj /mnt/shared/iris_t11216158/CECICSIC/model/blobs/sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 52 --flash-attn --parallel 1 --tensor-split 1,1,1,1 --port 33471"
time=2024-11-15T03:13:01.313Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-15T03:13:01.313Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-15T03:13:01.314Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-15T03:13:01.367Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-15T03:13:01.368Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=52
time=2024-11-15T03:13:01.368Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:33471"
llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from /mnt/shared/iris_t11216158/CECICSIC/model/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = mllama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Model llama_model_loader: - kv 3: general.size_label str = 88B llama_model_loader: - kv 4: mllama.block_count u32 = 100 llama_model_loader: - kv 5: mllama.context_length u32 = 131072 llama_model_loader: - kv 6: mllama.embedding_length u32 = 8192 llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 28672 llama_model_loader: - kv 8: mllama.attention.head_count u32 = 64 llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 12: general.file_type u32 = 15 llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256 llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128 llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,20] = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48... llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004 llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... 
llama_model_loader: - kv 26: general.quantization_version u32 = 2 llama_model_loader: - type f32: 282 tensors llama_model_loader: - type q4_K: 611 tensors llama_model_loader: - type q6_K: 91 tensors time=2024-11-15T03:13:01.570Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model" llm_load_vocab: special tokens cache size = 257 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = mllama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 8192 llm_load_print_meta: n_layer = 100 llm_load_print_meta: n_head = 64 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 8 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 28672 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 87.67 B llm_load_print_meta: model size = 49.08 GiB (4.81 BPW) llm_load_print_meta: general.name = Model llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab mismatch 128256 !- 128257 ... 
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 4 CUDA devices: Device 0: Tesla V100-PCIE-16GB, compute capability 7.0, VMM: yes Device 1: Tesla V100-PCIE-16GB, compute capability 7.0, VMM: yes Device 2: Tesla V100-PCIE-16GB, compute capability 7.0, VMM: yes Device 3: Tesla V100-PCIE-16GB, compute capability 7.0, VMM: yes llm_load_tensors: ggml ctx size = 2.25 MiB llm_load_tensors: offloading 4 repeating layers to GPU llm_load_tensors: offloaded 4/101 layers to GPU llm_load_tensors: CPU buffer size = 50261.92 MiB llm_load_tensors: CUDA0 buffer size = 518.88 MiB llm_load_tensors: CUDA1 buffer size = 518.88 MiB llm_load_tensors: CUDA2 buffer size = 516.81 MiB llm_load_tensors: CUDA3 buffer size = 518.88 MiB llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 1 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CUDA_Host KV buffer size = 1566.59 MiB llama_kv_cache_init: CUDA0 KV buffer size = 8.00 MiB llama_kv_cache_init: CUDA1 KV buffer size = 8.00 MiB llama_kv_cache_init: CUDA2 KV buffer size = 50.03 MiB llama_kv_cache_init: CUDA3 KV buffer size = 8.00 MiB llama_new_context_with_model: KV self size = 1640.62 MiB, K (f16): 820.31 MiB, V (f16): 820.31 MiB llama_new_context_with_model: CUDA_Host output buffer size = 0.52 MiB llama_new_context_with_model: CUDA0 compute buffer size = 1088.45 MiB llama_new_context_with_model: CUDA1 compute buffer size = 144.00 MiB llama_new_context_with_model: CUDA3 compute buffer size = 162.00 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB llama_new_context_with_model: graph nodes = 2247 llama_new_context_with_model: graph splits = 1007 mllama_model_load: model name: Llama-3.2-90B-Vision-Instruct mllama_model_load: description: vision encoder for Mllama mllama_model_load: GGUF version: 3 mllama_model_load: alignment: 32 mllama_model_load: n_tensors: 512 mllama_model_load: n_kv: 17 mllama_model_load: ftype: f16 mllama_model_load: mllama_model_load: vision using CUDA backend mllama_model_load: compute allocated memory: 2853.34 MB time=2024-11-15T03:13:07.353Z level=INFO source=server.go:601 msg="llama runner started in 6.04 seconds" llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from /mnt/shared/iris_t11216158/CECICSIC/model/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = mllama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Model llama_model_loader: - kv 3: general.size_label str = 88B llama_model_loader: - kv 4: mllama.block_count u32 = 100 llama_model_loader: - kv 5: mllama.context_length u32 = 131072 llama_model_loader: - kv 6: mllama.embedding_length u32 = 8192 llama_model_loader: - kv 7: mllama.feed_forward_length u32 = 28672 llama_model_loader: - kv 8: mllama.attention.head_count u32 = 64 llama_model_loader: - kv 9: mllama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 10: mllama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 11: mllama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 12: general.file_type u32 = 15 llama_model_loader: - kv 13: mllama.vocab_size u32 = 128256 llama_model_loader: - kv 14: mllama.rope.dimension_count u32 = 128 llama_model_loader: - kv 15: mllama.attention.cross_attention_layers arr[i32,20] = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48... llama_model_loader: - kv 16: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128257] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128004 llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 26: general.quantization_version u32 = 2 llama_model_loader: - type f32: 282 tensors llama_model_loader: - type q4_K: 611 tensors llama_model_loader: - type q6_K: 91 tensors llm_load_vocab: special tokens cache size = 257 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = mllama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 87.67 B llm_load_print_meta: model size = 49.08 GiB (4.81 BPW) llm_load_print_meta: general.name = Model llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: PAD token = 128004 '<|finetune_right_pad_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab mismatch 128256 !- 128257 ... llama_model_load: vocab only - skipping tensors [GIN] 2024/11/15 - 03:19:18 | 200 | 6m18s | 10.190.252.10 | POST "/api/chat"
time=2024-11-15T03:13:01.312Z level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.9 GiB" projector.graph="2.8 GiB" layers.requested=4 layers.model=101 layers.offload=4 layers.split=1,1,1,1 memory.available="[15.5 GiB 15.5 GiB 15.5 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="65.1 GiB" memory.required.partial="14.9 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[7.2 GiB 2.6 GiB 2.6 GiB 2.6 GiB]" memory.weights.total="49.3 GiB" memory.weights.repeating="48.5 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
You have set num_gpu=4 somewhere, either in the API call, as a PARAMETER in the Modelfile, or with a /set command in the CLI. This doesn't set the number of GPUs to use; it sets the number of layers to offload to the GPU. Unset it and more of the model will be loaded into the GPUs.
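To illustrate (a sketch, not from the original reply; the port and model name follow the logs above, and num_gpu is the standard Ollama option for layers to offload):

```shell
# Check whether the Modelfile pins num_gpu (look for a "PARAMETER num_gpu" line).
ollama show llama3.2-vision:90b --modelfile | grep -i num_gpu

# A request that pins the offload to 4 layers, matching the behavior in the logs:
curl http://localhost:10080/api/generate -d '{
  "model": "llama3.2-vision:90b",
  "prompt": "hello",
  "options": {"num_gpu": 4}
}'

# The same request without num_gpu lets the scheduler offload as many layers
# as fit across the four V100s:
curl http://localhost:10080/api/generate -d '{
  "model": "llama3.2-vision:90b",
  "prompt": "hello"
}'
```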
What is the issue?
When I use llama3.2-vision:90b, the model always responds very slowly. What can I do? The GPU is not fully used, while CPU usage is very high. It's running on 4x V100 = 64 GB of GPU memory. Can anyone help?
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
0.4.1