Closed: DerekJuba-NIST closed this issue 3 days ago.
And here is a crash with the latest commit 925c309. I notice now that the error is a bit different.
$ /opt/llama.cpp-925c309/bin/llama-server --host localhost --port 18443 --n-gpu-layers 0 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf
INFO [ main] build info | tid="140022984699904" timestamp=1719339935 build=3225 commit="925c3095"
INFO [ main] system info | tid="140022984699904" timestamp=1719339935 n_threads=32 n_threads_batch=-1 total_threads=64 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "
llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from meta-llama-3-70b-instruct-q4_k.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA TITAN RTX, compute capability 7.5, VMM: yes
Device 1: NVIDIA TITAN RTX, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/81 layers to GPU
llm_load_tensors: CPU buffer size = 40543.11 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.98 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1108.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 32.01 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 884
INFO [ init] initializing slots | tid="140022984699904" timestamp=1719339940 n_slots=1
INFO [ init] new slot | tid="140022984699904" timestamp=1719339940 id_slot=0 n_ctx_slot=8192
INFO [ main] model loaded | tid="140022984699904" timestamp=1719339940
INFO [ main] chat template | tid="140022984699904" timestamp=1719339940 chat_example="<|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHello<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nHi there<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHow are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" built_in=true
INFO [ main] HTTP server listening | tid="140022984699904" timestamp=1719339940 n_threads_http="63" port="18443" hostname="localhost"
INFO [ update_slots] all slots are idle | tid="140022984699904" timestamp=1719339940
INFO [ launch_slot_with_task] slot is processing task | tid="140022984699904" timestamp=1719339944 id_slot=0 id_task=0
INFO [ update_slots] kv cache rm [p0, end) | tid="140022984699904" timestamp=1719339944 id_slot=0 id_task=0 p0=0
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
CUDA error: misaligned address
current device: 0, in function ggml_backend_cuda_synchronize at /XXX/llama.cpp/ggml-cuda.cu:2388
cudaStreamSynchronize(cuda_ctx->stream())
GGML_ASSERT: /XXX/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
Aborted
And one more log, this time on the latest commit with all layers on GPU.
$ /opt/llama.cpp-925c309/bin/llama-server --host localhost --port 18443 --n-gpu-layers 81 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf
INFO [ main] build info | tid="140529722863616" timestamp=1719340145 build=3225 commit="925c3095"
INFO [ main] system info | tid="140529722863616" timestamp=1719340145 n_threads=32 n_threads_batch=-1 total_threads=64 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "
llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from meta-llama-3-70b-instruct-q4_k.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA TITAN RTX, compute capability 7.5, VMM: yes
Device 1: NVIDIA TITAN RTX, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 1.01 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 563.62 MiB
llm_load_tensors: CUDA0 buffer size = 20038.81 MiB
llm_load_tensors: CUDA1 buffer size = 19940.67 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1312.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 1248.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.98 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 3
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
INFO [ init] initializing slots | tid="140529722863616" timestamp=1719340152 n_slots=1
INFO [ init] new slot | tid="140529722863616" timestamp=1719340152 id_slot=0 n_ctx_slot=8192
INFO [ main] model loaded | tid="140529722863616" timestamp=1719340152
INFO [ main] chat template | tid="140529722863616" timestamp=1719340152 chat_example="<|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHello<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nHi there<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHow are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" built_in=true
INFO [ main] HTTP server listening | tid="140529722863616" timestamp=1719340152 n_threads_http="63" port="18443" hostname="localhost"
INFO [ update_slots] all slots are idle | tid="140529722863616" timestamp=1719340152
INFO [ launch_slot_with_task] slot is processing task | tid="140529722863616" timestamp=1719340154 id_slot=0 id_task=0
INFO [ update_slots] kv cache rm [p0, end) | tid="140529722863616" timestamp=1719340154 id_slot=0 id_task=0 p0=0
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
CUDA error: misaligned address
current device: 0, in function launch_mul_mat_q at /XXX/llama.cpp/ggml-cuda/template-instances/../mmq.cuh:2452
cudaFuncSetAttribute(mul_mat_q<type, mmq_x, 8, false>, cudaFuncAttributeMaxDynamicSharedMemorySize, shmem)
GGML_ASSERT: /XXX/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
Aborted
Yes, I am having the exact same problem on Windows 10. It appears to happen during prompt processing with batch sizes >= 16.
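If the >= 16 threshold is real, one way to probe it (assuming --ubatch-size on this build caps the prompt-processing micro-batch, as the n_ubatch lines in the logs above suggest) would be to pin the micro-batch below 16 and check whether the crash disappears, e.g. on the Linux setup above:
$ /opt/llama.cpp-925c309/bin/llama-server --host localhost --port 18443 --n-gpu-layers 81 --ctx-size 8192 --ubatch-size 8 --model meta-llama-3-70b-instruct-q4_k.gguf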
I am not able to reproduce the issue. Can you post the last few hundred lines that you get when you prepend the crashing command with compute-sanitizer? (Found under /opt/cuda/extras/compute-sanitizer/compute-sanitizer on my system.)
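For example, with the paths from the logs in this thread (the sanitizer is simply prepended; all llama-server flags stay the same):
$ /opt/cuda/extras/compute-sanitizer/compute-sanitizer /opt/llama.cpp-925c309/bin/llama-server --host localhost --port 18443 --n-gpu-layers 81 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf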
Do other models crash as well? In particular, do non-k-quants models crash?
Here is the end of the log. This is from 925c309.
I'll also mention that I tried this version on another multi-GPU machine with different GPUs (V100) but the same Ubuntu (20.04) and Nvidia driver (555), and got no errors.
========= Invalid __shared__ read of size 16 bytes
========= at void mul_mat_q<(ggml_type)14, (int)64, (int)8, (bool)0>(const char *, const char *, float *, float *, int, int, int, int, int, int, int)+0xdb60
========= by thread (16,2,0) in block (36,0,0)
========= Address 0x4a08 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2c9def]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15a13]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75750]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaError cudaLaunchKernel<char>(char const*, dim3, dim3, void**, unsigned long, CUstream_st*) [0x400a02]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__device_stub__Z9mul_mat_qIL9ggml_type14ELi64ELi8ELb0EEvPKcS2_PfS3_iiiiiii(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x3f963b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void __wrapper__device_stub_mul_mat_q<(ggml_type)14, 64, 8, false>(char const* restrict&, char const* restrict&, float* restrict&, float* restrict&, int const&, int const&, int const&, int const&, int const&, int const&, int const&) [0x3f96f0]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q<(ggml_type)14, 64, 8, false>(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x40148f]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void launch_mul_mat_q<(ggml_type)14, 64>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x405d4a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q_case<(ggml_type)14>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x40a025]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat_q(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*) [0x2c0216]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, void (*)(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*), void (*)(float const*, void*, long, long, long, long, ggml_type, CUstream_st*)) [0x26419b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) [0x2661c9]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) [0x267562]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) [0x268b4d]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_sched_graph_compute_async [0x22a4c5]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:llama_decode [0x140579]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_context::update_slots() [0xb374a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_queue::start_loop() [0xa1d3b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:main [0x3c9ae]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__libc_start_main [0x24082]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:_start [0x44abd]
========= in /opt/llama.cpp-925c309/bin/llama-server
=========
========= Invalid __shared__ read of size 16 bytes
========= at void mul_mat_q<(ggml_type)14, (int)64, (int)8, (bool)0>(const char *, const char *, float *, float *, int, int, int, int, int, int, int)+0xdb60
========= by thread (17,2,0) in block (36,0,0)
========= Address 0x4b38 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2c9def]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15a13]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75750]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaError cudaLaunchKernel<char>(char const*, dim3, dim3, void**, unsigned long, CUstream_st*) [0x400a02]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__device_stub__Z9mul_mat_qIL9ggml_type14ELi64ELi8ELb0EEvPKcS2_PfS3_iiiiiii(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x3f963b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void __wrapper__device_stub_mul_mat_q<(ggml_type)14, 64, 8, false>(char const* restrict&, char const* restrict&, float* restrict&, float* restrict&, int const&, int const&, int const&, int const&, int const&, int const&, int const&) [0x3f96f0]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q<(ggml_type)14, 64, 8, false>(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x40148f]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void launch_mul_mat_q<(ggml_type)14, 64>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x405d4a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q_case<(ggml_type)14>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x40a025]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat_q(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*) [0x2c0216]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, void (*)(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*), void (*)(float const*, void*, long, long, long, long, ggml_type, CUstream_st*)) [0x26419b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) [0x2661c9]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) [0x267562]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) [0x268b4d]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_sched_graph_compute_async [0x22a4c5]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:llama_decode [0x140579]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_context::update_slots() [0xb374a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_queue::start_loop() [0xa1d3b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:main [0x3c9ae]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__libc_start_main [0x24082]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:_start [0x44abd]
========= in /opt/llama.cpp-925c309/bin/llama-server
=========
========= Invalid __shared__ read of size 16 bytes
========= at void mul_mat_q<(ggml_type)14, (int)64, (int)8, (bool)0>(const char *, const char *, float *, float *, int, int, int, int, int, int, int)+0xdb60
========= by thread (18,2,0) in block (36,0,0)
========= Address 0x4c68 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2c9def]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15a13]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75750]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaError cudaLaunchKernel<char>(char const*, dim3, dim3, void**, unsigned long, CUstream_st*) [0x400a02]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__device_stub__Z9mul_mat_qIL9ggml_type14ELi64ELi8ELb0EEvPKcS2_PfS3_iiiiiii(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x3f963b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void __wrapper__device_stub_mul_mat_q<(ggml_type)14, 64, 8, false>(char const* restrict&, char const* restrict&, float* restrict&, float* restrict&, int const&, int const&, int const&, int const&, int const&, int const&, int const&) [0x3f96f0]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q<(ggml_type)14, 64, 8, false>(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x40148f]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void launch_mul_mat_q<(ggml_type)14, 64>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x405d4a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q_case<(ggml_type)14>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x40a025]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat_q(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*) [0x2c0216]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, void (*)(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*), void (*)(float const*, void*, long, long, long, long, ggml_type, CUstream_st*)) [0x26419b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) [0x2661c9]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) [0x267562]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) [0x268b4d]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_sched_graph_compute_async [0x22a4c5]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:llama_decode [0x140579]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_context::update_slots() [0xb374a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_queue::start_loop() [0xa1d3b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:main [0x3c9ae]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__libc_start_main [0x24082]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:_start [0x44abd]
========= in /opt/llama.cpp-925c309/bin/llama-server
=========
========= Invalid __shared__ read of size 16 bytes
========= at void mul_mat_q<(ggml_type)14, (int)64, (int)8, (bool)0>(const char *, const char *, float *, float *, int, int, int, int, int, int, int)+0xdb60
========= by thread (19,2,0) in block (36,0,0)
========= Address 0x4d98 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2c9def]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15a13]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75750]
========= in /usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudart.so.12
========= Host Frame:cudaError cudaLaunchKernel<char>(char const*, dim3, dim3, void**, unsigned long, CUstream_st*) [0x400a02]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__device_stub__Z9mul_mat_qIL9ggml_type14ELi64ELi8ELb0EEvPKcS2_PfS3_iiiiiii(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x3f963b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void __wrapper__device_stub_mul_mat_q<(ggml_type)14, 64, 8, false>(char const* restrict&, char const* restrict&, float* restrict&, float* restrict&, int const&, int const&, int const&, int const&, int const&, int const&, int const&) [0x3f96f0]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q<(ggml_type)14, 64, 8, false>(char const*, char const*, float*, float*, int, int, int, int, int, int, int) [0x40148f]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void launch_mul_mat_q<(ggml_type)14, 64>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x405d4a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:void mul_mat_q_case<(ggml_type)14>(ggml_backend_cuda_context&, mmq_args const&, CUstream_st*) [0x40a025]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat_q(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*) [0x2c0216]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_op_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, void (*)(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, CUstream_st*), void (*)(float const*, void*, long, long, long, long, ggml_type, CUstream_st*)) [0x26419b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) [0x2661c9]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) [0x267562]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) [0x268b4d]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:ggml_backend_sched_graph_compute_async [0x22a4c5]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:llama_decode [0x140579]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_context::update_slots() [0xb374a]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:server_queue::start_loop() [0xa1d3b]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:main [0x3c9ae]
========= in /opt/llama.cpp-925c309/bin/llama-server
========= Host Frame:__libc_start_main [0x24082]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:_start [0x44abd]
========= in /opt/llama.cpp-925c309/bin/llama-server
=========
CUDA error: unspecified launch failure
current device: 0, in function ggml_cuda_op_mul_mat at /XXX/llama.cpp/ggml-cuda.cu:1606
cudaGetLastError()
GGML_ASSERT: /XXX/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
========= Error: process didn't terminate successfully
========= Target application returned an error
========= ERROR SUMMARY: 4914 errors
========= ERROR SUMMARY: 4814 errors were not printed. Use --print-limit option to adjust the number of printed errors
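A note on reading the sanitizer output above: the failing template instance is mul_mat_q<(ggml_type)14, ...>, and type 14 appears to correspond to GGML_TYPE_Q6_K in ggml's type enum (if I am reading the enum right), which would fit, since this q4_K file contains 81 q6_K tensors. The faulting addresses 0x4a08, 0x4b38, 0x4c68, and 0x4d98 are all congruent to 8 mod 16, the classic signature of a 16-byte vectorized load from __shared__ memory at an address with only 8-byte alignment. A minimal standalone CUDA sketch that triggers the same class of fault (illustrative only, not llama.cpp's actual MMQ code):

// compile: nvcc -o misaligned misaligned.cu
#include <cstdio>

// A 16-byte (int4) load from __shared__ memory at a +8 byte offset, i.e.
// only 8-byte aligned. The hardware traps on this, and the runtime surfaces
// it as "CUDA error: misaligned address".
__global__ void misaligned_shared_read(float *out) {
    __shared__ __align__(16) unsigned char buf[256];
    for (int i = threadIdx.x; i < 256; i += blockDim.x) {
        buf[i] = (unsigned char) i;
    }
    __syncthreads();
    // buf is 16-byte aligned, so buf + 8 is only 8-byte aligned.
    const int4 v = *reinterpret_cast<const int4 *>(buf + 8);
    out[threadIdx.x] = (float) v.x;
}

int main() {
    float *out = nullptr;
    cudaMalloc(&out, 32 * sizeof(float));
    misaligned_shared_read<<<1, 32>>>(out);
    // The fault is asynchronous; it surfaces at the next synchronization,
    // just as the first crash above surfaced in ggml_backend_cuda_synchronize.
    printf("%s\n", cudaGetErrorString(cudaDeviceSynchronize()));
    cudaFree(out);
    return 0;
}

Run under compute-sanitizer, this should report the same "Invalid __shared__ read of size 16 bytes ... is misaligned" pattern as the log above.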
phi-3-mini-4k-instruct-q4_k.gguf crashes but phi-3-mini-4k-instruct-f16.gguf does not.
Please confirm whether this fix works: https://github.com/ggerganov/llama.cpp/pull/8123.
Looks like #8123 fixes it, thanks.
What happened?
llama-server crashes reproducibly with a GGML CUDA error on commit a818f30 and later; commit d62e4aa and earlier work correctly. I have not been able to reproduce this with llama-cli.
/opt/llama.cpp-a818f30/bin/llama-server --host localhost --port 18443 --n-gpu-layers 81 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf
In addition to the log I posted, I also tried launching on a single GPU with only one GPU layer, but the result is the same.
CUDA_VISIBLE_DEVICES=0 /opt/llama.cpp-a818f30/bin/llama-server --host localhost --port 18443 --n-gpu-layers 1 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf
Even zero GPU layers will cause a crash.
CUDA_VISIBLE_DEVICES=0 /opt/llama.cpp-a818f30/bin/llama-server --host localhost --port 18443 --n-gpu-layers 0 --ctx-size 8192 --model meta-llama-3-70b-instruct-q4_k.gguf
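Given that a818f30 and later crash while d62e4aa and earlier work, the window can be narrowed with git bisect; repro.sh here is a hypothetical script that builds the checkout, runs the crashing command above, and exits nonzero on abort:
$ git bisect start a818f302 d62e4aa
$ git bisect run ./repro.sh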
This may be related to #8096 (cc @JohannesGaessler).
Name and Version
$ /opt/llama.cpp-a818f30/bin/llama-server --version
version: 3216 (a818f302)
built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
What operating system are you seeing the problem on?
Linux
Relevant log output