ollama / ollama

Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.
https://ollama.com
MIT License

error loading model "Xiaobu Embedding v2": error="llama runner process has terminated: signal: segmentation fault (core dumped)" #6434

Closed · AAEE86 closed this issue 1 week ago

AAEE86 commented 3 weeks ago

What is the issue?

```
ollama-1 | time=2024-08-20T02:46:33.204Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.2 GiB]" memory.required.full="820.5 MiB" memory.required.partial="0 B" memory.required.kv="48.0 MiB" memory.required.allocations="[820.5 MiB]" memory.weights.total="625.2 MiB" memory.weights.repeating="584.0 MiB" memory.weights.nonrepeating="41.3 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
ollama-1 | time=2024-08-20T02:46:33.206Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama1960294902/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-85df6dbe02a3bfb67f24400c4d56ba8bd1a8a19a14450761b65ce17fe1d5064a --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 46451"
ollama-1 | time=2024-08-20T02:46:33.207Z level=INFO source=sched.go:445 msg="loaded runners" count=1
ollama-1 | time=2024-08-20T02:46:33.207Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
ollama-1 | time=2024-08-20T02:46:33.207Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
ollama-1 | INFO [main] build info | build=1 commit="1e6f655" tid="127020122728320" timestamp=1724121993
ollama-1 | INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="127020122728320" timestamp=1724121993 total_threads=32
ollama-1 | INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="46451" tid="127020122728320" timestamp=1724121993
ollama-1 | llama_model_loader: loaded meta data with 27 key-value pairs and 389 tensors from /root/.ollama/models/blobs/sha256-85df6dbe02a3bfb67f24400c4d56ba8bd1a8a19a14450761b65ce17fe1d5064a (version GGUF V3 (latest))
ollama-1 | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama-1 | llama_model_loader: - kv 0: general.architecture str = bert
ollama-1 | llama_model_loader: - kv 1: general.type str = model
ollama-1 | llama_model_loader: - kv 2: general.name str = Xiaobu Embedding v2
ollama-1 | llama_model_loader: - kv 3: general.version str = v2
ollama-1 | llama_model_loader: - kv 4: general.basename str = xiaobu-embedding
ollama-1 | llama_model_loader: - kv 5: general.size_label str = 324M
ollama-1 | llama_model_loader: - kv 6: general.tags arr[str,1] = ["mteb"]
ollama-1 | llama_model_loader: - kv 7: bert.block_count u32 = 24
ollama-1 | llama_model_loader: - kv 8: bert.context_length u32 = 512
ollama-1 | llama_model_loader: - kv 9: bert.embedding_length u32 = 1024
ollama-1 | llama_model_loader: - kv 10: bert.feed_forward_length u32 = 4096
ollama-1 | llama_model_loader: - kv 11: bert.attention.head_count u32 = 16
ollama-1 | llama_model_loader: - kv 12: bert.attention.layer_norm_epsilon f32 = 0.000000
ollama-1 | llama_model_loader: - kv 13: general.file_type u32 = 1
ollama-1 | llama_model_loader: - kv 14: bert.attention.causal bool = false
ollama-1 | llama_model_loader: - kv 15: bert.pooling_type u32 = 1
ollama-1 | llama_model_loader: - kv 16: tokenizer.ggml.token_type_count u32 = 2
ollama-1 | llama_model_loader: - kv 17: tokenizer.ggml.model str = bert
ollama-1 | llama_model_loader: - kv 18: tokenizer.ggml.pre str = xiaobu
ollama-1 | llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,21128] = ["[PAD]", "[unused1]", "[unused2]", "...
ollama-1 | llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,21128] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama-1 | llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 100
ollama-1 | llama_model_loader: - kv 22: tokenizer.ggml.seperator_token_id u32 = 102
ollama-1 | llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0
ollama-1 | llama_model_loader: - kv 24: tokenizer.ggml.cls_token_id u32 = 101
ollama-1 | llama_model_loader: - kv 25: tokenizer.ggml.mask_token_id u32 = 103
ollama-1 | llama_model_loader: - kv 26: general.quantization_version u32 = 2
ollama-1 | llama_model_loader: - type f32: 244 tensors
ollama-1 | llama_model_loader: - type f16: 145 tensors
ollama-1 | llm_load_vocab: special tokens cache size = 5
ollama-1 | llm_load_vocab: token to piece cache size = 0.0769 MB
ollama-1 | llm_load_print_meta: format = GGUF V3 (latest)
ollama-1 | llm_load_print_meta: arch = bert
ollama-1 | llm_load_print_meta: vocab type = WPM
ollama-1 | llm_load_print_meta: n_vocab = 21128
ollama-1 | llm_load_print_meta: n_merges = 0
ollama-1 | llm_load_print_meta: vocab_only = 0
ollama-1 | llm_load_print_meta: n_ctx_train = 512
ollama-1 | llm_load_print_meta: n_embd = 1024
ollama-1 | llm_load_print_meta: n_layer = 24
ollama-1 | llm_load_print_meta: n_head = 16
ollama-1 | llm_load_print_meta: n_head_kv = 16
ollama-1 | llm_load_print_meta: n_rot = 64
ollama-1 | llm_load_print_meta: n_swa = 0
ollama-1 | llm_load_print_meta: n_embd_head_k = 64
ollama-1 | llm_load_print_meta: n_embd_head_v = 64
ollama-1 | llm_load_print_meta: n_gqa = 1
ollama-1 | llm_load_print_meta: n_embd_k_gqa = 1024
ollama-1 | llm_load_print_meta: n_embd_v_gqa = 1024
ollama-1 | llm_load_print_meta: f_norm_eps = 1.0e-12
ollama-1 | llm_load_print_meta: f_norm_rms_eps = 0.0e+00
ollama-1 | llm_load_print_meta: f_clamp_kqv = 0.0e+00
ollama-1 | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama-1 | llm_load_print_meta: f_logit_scale = 0.0e+00
ollama-1 | llm_load_print_meta: n_ff = 4096
ollama-1 | llm_load_print_meta: n_expert = 0
ollama-1 | llm_load_print_meta: n_expert_used = 0
ollama-1 | llm_load_print_meta: causal attn = 0
ollama-1 | llm_load_print_meta: pooling type = 1
ollama-1 | llm_load_print_meta: rope type = 2
ollama-1 | llm_load_print_meta: rope scaling = linear
ollama-1 | llm_load_print_meta: freq_base_train = 10000.0
ollama-1 | llm_load_print_meta: freq_scale_train = 1
ollama-1 | llm_load_print_meta: n_ctx_orig_yarn = 512
ollama-1 | llm_load_print_meta: rope_finetuned = unknown
ollama-1 | llm_load_print_meta: ssm_d_conv = 0
ollama-1 | llm_load_print_meta: ssm_d_inner = 0
ollama-1 | llm_load_print_meta: ssm_d_state = 0
ollama-1 | llm_load_print_meta: ssm_dt_rank = 0
ollama-1 | llm_load_print_meta: model type = 335M
ollama-1 | llm_load_print_meta: model ftype = F16
ollama-1 | llm_load_print_meta: model params = 324.47 M
ollama-1 | llm_load_print_meta: model size = 620.50 MiB (16.04 BPW)
ollama-1 | llm_load_print_meta: general.name = Xiaobu Embedding v2
ollama-1 | llm_load_print_meta: UNK token = 100 '[UNK]'
ollama-1 | llm_load_print_meta: SEP token = 102 '[SEP]'
ollama-1 | llm_load_print_meta: PAD token = 0 '[PAD]'
ollama-1 | llm_load_print_meta: CLS token = 101 '[CLS]'
ollama-1 | llm_load_print_meta: MASK token = 103 '[MASK]'
ollama-1 | llm_load_print_meta: LF token = 0 '[PAD]'
ollama-1 | llm_load_print_meta: max token length = 48
ollama-1 | llm_load_tensors: ggml ctx size = 0.16 MiB
ollama-1 | llm_load_tensors: CPU buffer size = 620.50 MiB
ollama-1 | time=2024-08-20T02:46:33.458Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
ollama-1 | llama_new_context_with_model: n_ctx = 8192
ollama-1 | llama_new_context_with_model: n_batch = 512
ollama-1 | llama_new_context_with_model: n_ubatch = 512
ollama-1 | llama_new_context_with_model: flash_attn = 0
ollama-1 | llama_new_context_with_model: freq_base = 10000.0
ollama-1 | llama_new_context_with_model: freq_scale = 1
ollama-1 | llama_kv_cache_init: CPU KV buffer size = 768.00 MiB
ollama-1 | llama_new_context_with_model: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB
ollama-1 | llama_new_context_with_model: CPU output buffer size = 0.00 MiB
ollama-1 | llama_new_context_with_model: CPU compute buffer size = 25.01 MiB
ollama-1 | llama_new_context_with_model: graph nodes = 851
ollama-1 | llama_new_context_with_model: graph splits = 1
ollama-1 | time=2024-08-20T02:46:34.161Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server not responding"
ollama-1 | time=2024-08-20T02:46:35.314Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
ollama-1 | [GIN] 2024/08/20 - 02:46:36 | 500 | 3.116673729s | 172.17.0.1 | POST "/v1/embeddings"
ollama-1 | time=2024-08-20T02:46:36.317Z level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped)"
```
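
For reference, the failing request in the log is a POST to Ollama's OpenAI-compatible /v1/embeddings endpoint (the [GIN] line shows it returning 500 about three seconds after the runner segfaults). Below is a minimal sketch of such a request; the base URL, port, and model tag are assumptions for illustration, not values taken from this issue.

```python
# Minimal repro sketch (assumptions: Ollama listens on localhost:11434 and the
# xiaobu embedding model was pulled under the placeholder tag below).
import requests

OLLAMA_URL = "http://localhost:11434/v1/embeddings"  # OpenAI-compatible endpoint seen in the [GIN] log line
MODEL_TAG = "xiaobu-embedding-v2"                    # hypothetical tag; substitute the tag you actually pulled

resp = requests.post(
    OLLAMA_URL,
    json={"model": MODEL_TAG, "input": "hello world"},
    timeout=60,
)
print(resp.status_code)  # 500 corresponds to "llama runner process has terminated" above
print(resp.text)
```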

OS

Docker

GPU

Other

CPU

Intel

Ollama version

0.3.5

mxyng commented 3 weeks ago

Can you link where you got the model?

AAEE86 commented 3 weeks ago

> Can you link where you got the model?

https://ollama.com/search?q=xiaobu
I have tried both of the models listed there, and they both report the same error.

mxyng commented 2 weeks ago

Thanks for the link. I'm able to reproduce it. I'll keep you posted.

Lagstill commented 2 weeks ago

Same for me with llama3.1 on Docker.

mxyng commented 2 weeks ago

After some investigation, it seems to be an issue specific to this model (xiaobu embedding v2). For some reason, llama.cpp segfaults when accessing the inp_embd data around 50% of the time. I'm not sure what the root cause is; the tensor seems to be initialized correctly.

You might have some luck cross-posting this to the llama.cpp GitHub.
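
Since the crash reportedly happens only around half the time, repeating the request a handful of times makes the intermittency visible. The loop below is a rough sketch under the same assumptions as the earlier snippet (local server, placeholder model tag); it is not part of the debugging described above, just a way to tally how often the runner dies.

```python
# Rough sketch: repeat the embeddings call and count successes vs. 500s to
# observe the intermittent segfault described above. The endpoint and model
# tag are assumptions, not values confirmed in this thread.
import requests

OLLAMA_URL = "http://localhost:11434/v1/embeddings"
MODEL_TAG = "xiaobu-embedding-v2"  # placeholder tag

ok = failed = 0
for i in range(20):
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL_TAG, "input": f"probe {i}"},
        timeout=60,
    )
    if resp.status_code == 200:
        ok += 1
    else:
        failed += 1  # a 500 here means the runner crashed and will be reloaded on the next call
print(f"succeeded: {ok}, failed: {failed}")
```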

AAEE86 commented 2 weeks ago

> After some investigation, it seems to be an issue specific to this model (xiaobu embedding v2). For some reason, llama.cpp segfaults when accessing the inp_embd data around 50% of the time. I'm not sure what the root cause is; the tensor seems to be initialized correctly.
>
> You might have some luck cross-posting this to the llama.cpp GitHub.

Thanks for the reply, I'll try to submit this to the llama.cpp GitHub.

AAEE86 commented 2 weeks ago

> After some investigation, it seems to be an issue specific to this model (xiaobu embedding v2). For some reason, llama.cpp segfaults when accessing the inp_embd data around 50% of the time. I'm not sure what the root cause is; the tensor seems to be initialized correctly.
>
> You might have some luck cross-posting this to the llama.cpp GitHub.

After I installed an NVIDIA GPU, the error stopped occurring.