lmstudio-ai / lmstudio-bug-tracker

Bug tracking for the LM Studio desktop application

GPU Compatibility Issue with Quadro K2200 and CUDA #165

Open · endaro opened 1 month ago

endaro commented 1 month ago

Description: I am having trouble using my GPU (a Quadro K2200) with the latest version of LM Studio (app-0.3.4, per the paths in the log). Below is the log output from an attempt to load a model.

Steps Taken:

  1. Initially I was on the latest CUDA release, which produced the same error.
  2. After investigating, I concluded that my Quadro K2200 needs CUDA 10.x (specifically 10.2); a sketch for checking the installed CUDA version follows this list.
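To rule out a version mismatch, here is a minimal sketch of how the installed CUDA driver and runtime versions can be checked programmatically. This is illustrative only, not LM Studio code; the file name is hypothetical, and it assumes the CUDA toolkit (nvcc) is installed.

```cpp
// check_cuda_version.cu -- hypothetical helper, not LM Studio code.
// Prints the newest CUDA version the installed driver supports and the
// CUDA runtime version this binary was built against.
// Build: nvcc check_cuda_version.cu -o check_cuda_version
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driver = 0, runtime = 0;
    cudaDriverGetVersion(&driver);    // encoded as 1000*major + 10*minor
    cudaRuntimeGetVersion(&runtime);
    std::printf("driver supports up to CUDA %d.%d\n",
                driver / 1000, (driver % 1000) / 10);
    std::printf("runtime built against CUDA %d.%d\n",
                runtime / 1000, (runtime % 1000) / 10);
    return 0;
}
```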

Request:

What are the exact requirements for GPU acceleration in this context, and could you help me work out from the logs below why the GPU isn't working?
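As far as I can tell, the relevant hardware constraint is the GPU's compute capability rather than the CUDA toolkit version alone; the log below reports the K2200 as compute capability 5.0 (Maxwell). A minimal sketch to query it (again illustrative, with a hypothetical file name):

```cpp
// check_compute_cap.cu -- hypothetical helper, not LM Studio code.
// Lists every visible CUDA device with its compute capability; per the
// log below, the Quadro K2200 reports 5.0 (Maxwell).
// Build: nvcc check_compute_cap.cu -o check_compute_cap
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no usable CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```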

System Info

Operating System: Windows 10 Pro, Version 22H2 (Build 19045.5011)
License: Microsoft Software License
Processor: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
Physical Cores: 4
Threads: 8
Clock Speed: 3.40 GHz (Current: 3.408 GHz)
Architecture: 64-bit
L2 Cache: 1024 KB
L3 Cache: 8192 KB
Virtualization Support: Enabled
RAM: 16 GB
CPU Type: Intel64 Family 6 Model 94 Stepping 3
Hardware Virtualization Extensions Support: Yes
Second-Level Address Translation (SLAT): Enabled

Log Output:

2024-10-20 00:34:49 [DEBUG] AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
2024-10-20 00:34:50 [DEBUG] ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
2024-10-20 00:34:50 [DEBUG] Device 0: Quadro K2200, compute capability 5.0, VMM: yes
2024-10-20 00:34:50 [DEBUG] llama_model_loader: loaded meta data with 31 key-value pairs and 255 tensors from G:\USERS\es04080125.cache\lm-studio\models\lmstudio-community\Llama-3.2-3B-Instruct-GGUF\Llama-3.2-3B-Instruct-Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.license str = llama3.2
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
2024-10-20 00:34:50 [DEBUG] llama_model_loader: - kv 9: llama.block_count u32 = 28
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 3072
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 13: llama.attention.head_count u32 = 24
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 128
llama_model_loader: - kv 18: llama.attention.value_length u32 = 128
llama_model_loader: - kv 19: general.file_type u32 = 15
llama_model_loader: - kv 20: llama.vocab_size u32 = 128256
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = llama-bpe
2024-10-20 00:34:50 [DEBUG] llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
2024-10-20 00:34:50 [DEBUG] llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2024-10-20 00:34:51 [DEBUG] llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - kv 30: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - type f32: 58 tensors
llama_model_loader: - type q4_K: 168 tensors
llama_model_loader: - type q6_K: 29 tensors
2024-10-20 00:34:51 [DEBUG] llm_load_vocab: special tokens cache size = 256
2024-10-20 00:34:51 [DEBUG] llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 24
2024-10-20 00:34:51 [DEBUG] llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 3
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 3.21 B
llm_load_print_meta: model size = 1.87 GiB (5.01 BPW)
llm_load_print_meta: general.name = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
2024-10-20 00:34:54 [DEBUG] llm_load_tensors: ggml ctx size = 0.24 MiB
2024-10-20 00:35:02 [DEBUG] llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 308.23 MiB
llm_load_tensors: CUDA0 buffer size = 1918.36 MiB
2024-10-20 00:35:28 [DEBUG] llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
2024-10-20 00:35:30 [DEBUG] llama_kv_cache_init: CUDA0 KV buffer size = 448.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
2024-10-20 00:35:31 [DEBUG] llama_new_context_with_model: CUDA_Host output buffer size = 0.49 MiB
2024-10-20 00:35:31 [DEBUG] llama_new_context_with_model: CUDA0 compute buffer size = 256.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 14.01 MiB
llama_new_context_with_model: graph nodes = 902
llama_new_context_with_model: graph splits = 2
2024-10-20 00:35:31 [DEBUG] llama_init_from_gpt_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
2024-10-20 00:35:31 [DEBUG] ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: no kernel image is available for execution on the device
current device: 0, in function ggml_cuda_compute_forward at C:\Projects\llmster-new\electron\vendor\llm-engine\llama.cpp\ggml\src\ggml-cuda.cu:2341
err
2024-10-20 00:35:31 [DEBUG] llama.cpp abort:70: CUDA error
2024-10-20 00:38:08 [DEBUG] [INFO] [PaniniRagEngine] Loading model into embedding engine...
[WARNING] Batch size (512) is < context length (2048). Resetting batch size to context length to avoid unexpected behavior.
2024-10-20 00:38:08 [DEBUG] [INFO] [LlamaEmbeddingEngine] Loading model from path: G:\USERS\es04080125\AppData\Local\LM-Studio\app-0.3.4\resources\app.webpack\main\bundled-models\nomic-ai\nomic-embed-text-v1.5-GGUF\nomic-embed-text-v1.5.Q4_K_M.gguf
2024-10-20 00:38:08 [DEBUG] ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Quadro K2200, compute capability 5.0, VMM: yes
2024-10-20 00:38:09 [DEBUG] llama_model_loader: loaded meta data with 23 key-value pairs and 112 tensors from G:\USERS\es04080125\AppData\Local\LM-Studio\app-0.3.4\resources\app.webpack\main\bundled-models\nomic-ai\nomic-embed-text-v1.5-GGUF\nomic-embed-text-v1.5.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = nomic-bert
llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 8: general.file_type u32 = 15
llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
2024-10-20 00:38:09 [DEBUG] llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
2024-10-20 00:38:09 [DEBUG] llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
2024-10-20 00:38:09 [DEBUG] llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
2024-10-20 00:38:09 [DEBUG] llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type q4_K: 43 tensors
llama_model_loader: - type q5_K: 12 tensors
llama_model_loader: - type q6_K: 6 tensors
2024-10-20 00:38:09 [DEBUG] llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 5
2024-10-20 00:38:09 [DEBUG] llm_load_vocab: token to piece cache size = 0.2032 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = nomic-bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 30522
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 768
llm_load_print_meta: n_layer = 12
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 12
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 768
llm_load_print_meta: n_embd_v_gqa = 768
2024-10-20 00:38:09 [DEBUG] llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 3072
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 137M
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 136.73 M
llm_load_print_meta: model size = 79.49 MiB (4.88 BPW)
llm_load_print_meta: general.name = nomic-embed-text-v1.5
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
llm_load_print_meta: EOG token = 102 '[SEP]'
llm_load_print_meta: max token length = 21
2024-10-20 00:38:09 [DEBUG] llm_load_tensors: ggml ctx size = 0.10 MiB
2024-10-20 00:38:10 [DEBUG] llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors: CPU buffer size = 12.58 MiB
llm_load_tensors: CUDA0 buffer size = 66.92 MiB
2024-10-20 00:38:10 [DEBUG] llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 2048
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000.0
llama_new_context_with_model: freq_scale = 1
2024-10-20 00:38:10 [DEBUG] llama_kv_cache_init: CUDA0 KV buffer size = 72.00 MiB
llama_new_context_with_model: KV self size = 72.00 MiB, K (f16): 36.00 MiB, V (f16): 36.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
2024-10-20 00:38:10 [DEBUG] llama_new_context_with_model: CUDA0 compute buffer size = 260.01 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 38.02 MiB
llama_new_context_with_model: graph nodes = 453
llama_new_context_with_model: graph splits = 2
2024-10-20 00:38:10 [DEBUG] llama_init_from_gpt_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
2024-10-20 00:38:10 [DEBUG] ggml_cuda_compute_forward: ADD failed
CUDA error: no kernel image is available for execution on the device
current device: 0, in function ggml_cuda_compute_forward at C:\Projects\llmster-new\electron\vendor\llm-engine\llama.cpp\ggml\src\ggml-cuda.cu:2341
err
llama.cpp abort:70: CUDA error
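For anyone triaging: the decisive line is `CUDA error: no kernel image is available for execution on the device`, raised on the first real kernel launch during warm-up (RMS_NORM for the chat model, then ADD for the embedding model). That error normally means the CUDA binary in use contains no kernels compiled for the device's architecture (here SM 5.0 / Maxwell), so changing the CUDA toolkit installed on the system will not affect the outcome. A minimal sketch of the failure mode (hypothetical repro, not LM Studio's code): if this file is compiled only for a newer architecture and run on a compute 5.0 card like the K2200, the launch fails with the same message.

```cpp
// no_kernel_image.cu -- hypothetical repro of the failure mode, not
// LM Studio code. Compiled only for a newer architecture, e.g.:
//   nvcc -arch=sm_61 no_kernel_image.cu -o no_kernel_image
// and then run on a compute 5.0 card (Quadro K2200), the launch fails
// with cudaErrorNoKernelImageForDevice: "no kernel image is available
// for execution on the device" -- the same message as in the log.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void noop() {}   // trivial kernel; only its compiled image matters

int main() {
    noop<<<1, 1>>>();
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess) {
        std::printf("kernel launch failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaDeviceSynchronize();
    std::printf("kernel ran: this binary includes an image for this GPU\n");
    return 0;
}
```

If that is the cause here, the options would be either an engine build that includes SM 5.0 kernels (for a self-built llama.cpp, something like setting CMake's `CMAKE_CUDA_ARCHITECTURES` to include 50; whether LM Studio's bundled build can be changed this way is an assumption on my part) or a non-CUDA backend, as suggested below.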

YorkieDev commented 4 weeks ago

Hi @endaro, try going to the LM Runtimes tab and switching from cuda llama.cpp to vulkan llama.cpp; that should enable GPU offload again for you.