intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0
6.49k stars 1.24k forks

Can ipex-llm[cpp] support the bge-m3 model? #11883

Open jianjungu opened 3 weeks ago

jianjungu commented 3 weeks ago

The official ollama supports this model in v0.3.4 https://github.com/ollama/ollama/releases/tag/v0.3.4

Tried with the ollama in ipex-llm[cpp] 2.1.0b20240820, but it failed with 0xc0000005:

time=2024-08-21T13:10:17.961+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=25 memory.available="40.5 GiB" memory.required.full="1.1 GiB" memory.required.partial="1.1 GiB" memory.required.kv="12.0 MiB" memory.weights.total="1.0 GiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
time=2024-08-21T13:10:17.963+08:00 level=INFO source=server.go:342 msg="starting llama server" cmd="C:\\Users\\intel\\ipex-llm-ollama_\\dist\\windows-amd64\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\intel\\.ollama\\models\\blobs\\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 999 --parallel 1 --port 55829"
time=2024-08-21T13:10:17.966+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-08-21T13:10:17.966+08:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-08-21T13:10:17.966+08:00 level=INFO source=server.go:566 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=1 commit="f6b084d" tid="39304" timestamp=1724217017
INFO [wmain] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="39304" timestamp=1724217017 total_threads=24
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="55829" tid="39304" timestamp=1724217017
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from C:\Users\intel\.ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 567M
llama_model_loader: - kv 3: general.license str = mit
llama_model_loader: - kv 4: general.tags arr[str,4] = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv 5: bert.block_count u32 = 24
llama_model_loader: - kv 6: bert.context_length u32 = 8192
llama_model_loader: - kv 7: bert.embedding_length u32 = 1024
llama_model_loader: - kv 8: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 9: bert.attention.head_count u32 = 16
llama_model_loader: - kv 10: bert.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 11: general.file_type u32 = 1
llama_model_loader: - kv 12: bert.attention.causal bool = false
llama_model_loader: - kv 13: bert.pooling_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = t5
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,250002] = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,250002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,250002] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 20: tokenizer.ggml.token_type_count u32 = 1
llama_model_loader: - kv 21: tokenizer.ggml.remove_extra_whitespaces bool = true
llama_model_loader: - kv 22: tokenizer.ggml.precompiled_charsmap arr[u8,237539] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 26: tokenizer.ggml.seperator_token_id u32 = 2
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.cls_token_id u32 = 0
llama_model_loader: - kv 29: tokenizer.ggml.mask_token_id u32 = 250001
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
llm_load_vocab: unknown tokenizer: 't5'
llm_load_vocab: using default tokenizer: 'llama'
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 250002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 4096
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 2
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 335M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 566.70 M
llm_load_print_meta: model size = 1.07 GiB (16.25 BPW)
llm_load_print_meta: general.name = n/a
time=2024-08-21T13:10:18.226+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000005 "
[GIN] 2024/08/21 - 13:10:18 | 500 | 1.5625998s | 127.0.0.1 | POST "/api/embeddings"
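For reference, the crash above is triggered by an embedding request to ollama's /api/embeddings endpoint (visible in the final [GIN] line). A minimal sketch of such a request is below; the model tag, prompt, and the default port 11434 are illustrative assumptions, not taken from the log.

```python
import requests

# Hypothetical reproduction: request an embedding for the bge-m3 model from the
# locally running ipex-llm ollama server. With ipex-llm[cpp] 2.1.0b20240820 the
# llama runner terminates with 0xc0000005 and the server answers with HTTP 500.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "bge-m3", "prompt": "hello world"},
    timeout=60,
)
print(resp.status_code)  # 500 while the runner keeps crashing
print(resp.json())       # embedding vector once the model loads correctly
```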

rnwang04 commented 2 weeks ago

Hi @jianjungu, ipex-llm's ollama has been upgraded to 0.3.6 with ipex-llm[cpp]>=2.2.0b20240827; you may give it a try 😊
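If it helps, a quick sketch for checking the installed ipex-llm build and re-trying the same embedding call after upgrading is shown below (package name, model tag, and port are the same assumptions as in the snippet above; this is not an official verification step).

```python
from importlib.metadata import version

import requests

# Confirm the installed ipex-llm build is at least 2.2.0b20240827,
# which is the version said to ship the ollama 0.3.6 runner.
print("ipex-llm:", version("ipex-llm"))

# Re-issue the embedding request; with the upgraded runner the bge-m3
# model should load instead of exiting with 0xc0000005.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "bge-m3", "prompt": "hello world"},
    timeout=60,
)
resp.raise_for_status()
print(len(resp.json()["embedding"]), "dimensions")
```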