When I increase the context size, one CUDA device disappears from the startup log and is no longer used.
I have two Nvidia cards: an RTX 4090 with 24 GB and an RTX 3060 with 12 GB, plus 192 GB of main memory.
I am trying to figure out how the context size changes the memory requirements. When I increase it even further, the second CUDA device disappears as well. If I load the model with the default context size, both devices are used.
This seems to be a problem in llama.cpp itself, because I see the same behavior with ollama.
Loading model: C:\Users\chris\.cache\lm-studio\models\bartowski\Mistral-Large-Instruct-2407-GGUF\Mistral-Large-Instruct-2407-IQ2_XXS.gguf
The reported GGUF Arch is: llama
Arch Category: 0
---
Identified as GGUF model: (ver 6)
Attempting to Load...
---
Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!
It means that the RoPE values written above will be replaced by the RoPE values indicated after loading.
System Info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
llama_model_loader: loaded meta data with 39 key-value pairs and 795 tensors from C:\Users\chris\.cache\lm-studio\models\bartowski\Mistral-Large-Instruct-2407-GGUF\Mistral-Large-Instruct-2407-IQ2_XXS.gguf
llm_load_vocab: special tokens cache size = 771
llm_load_vocab: token to piece cache size = 0.1732 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32768
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 12288
llm_load_print_meta: n_layer = 88
llm_load_print_meta: n_head = 96
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 12
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 122.61 B
llm_load_print_meta: model size = 30.20 GiB (2.12 BPW)
llm_load_print_meta: general.name = Mistral Large Instruct 2407
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
llm_load_print_meta: max token length = 48
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.87 MiB
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/89 layers to GPU
llm_load_tensors: CPU buffer size = 30927.42 MiB
llm_load_tensors: CUDA0 buffer size = 3440.62 MiB
....................................................................................................
Automatic RoPE Scaling: Using model internal value.
llama_new_context_with_model: n_ctx = 65632
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 19997.25 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 2563.75 MiB
llama_new_context_with_model: KV self size = 22561.00 MiB, K (f16): 11280.50 MiB, V (f16): 11280.50 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.12 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 12775.69 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 152.19 MiB
llama_new_context_with_model: graph nodes = 2822
llama_new_context_with_model: graph splits = 862
Load Text Model OK: True
Embedded KoboldAI Lite loaded.
Embedded API docs loaded.
Starting Kobold API on port 5001 at http://localhost:5001/api/
Starting OpenAI Compatible API on port 5001 at http://localhost:5001/v1/
======
Please connect to custom endpoint at http://localhost:5001
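For what it's worth, the "KV self size" reported in the log can be reproduced from the parameters the loader prints (n_layer, n_embd_k_gqa, n_embd_v_gqa, n_ctx), which shows why the KV cache grows linearly with context size and pushes layers off the GPUs. A minimal sketch of the arithmetic, assuming an f16 cache (2 bytes per element) and using the values from the log above:

```python
# Estimate the llama.cpp KV cache size from the values printed in the startup log:
#   n_layer = 88, n_embd_k_gqa = n_embd_v_gqa = 1024, n_ctx = 65632, f16 cache.
# This is a back-of-the-envelope sketch, not llama.cpp's actual allocation code.

def kv_cache_mib(n_ctx, n_layer, n_embd_k_gqa, n_embd_v_gqa, bytes_per_elem=2):
    """Total KV cache in MiB: one K and one V vector per token per layer."""
    total_bytes = n_ctx * n_layer * (n_embd_k_gqa + n_embd_v_gqa) * bytes_per_elem
    return total_bytes / (1024 ** 2)

print(kv_cache_mib(65632, 88, 1024, 1024))  # 22561.0, matching "KV self size" in the log
```

Since the cache scales linearly with n_ctx, doubling the context roughly doubles those 22.5 GiB, which quickly exceeds what either card can hold alongside the offloaded layers and compute buffers.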