OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

can't read file ./examples/minicpm-v2.5/slice_token_for_ollama.raw #7

Open · leeaction opened this issue 1 month ago

leeaction commented 1 month ago

What is the issue?

Hi,

I built Ollama locally following the docs and symlinked the resulting ollama executable into /usr/local/bin.

Ollama starts up fine and the model imports without problems.

I see that this file exists in the repository at examples/minicpm-v2.5/slice_token_for_ollama.raw; where am I supposed to put it?

But when I run chat inference, I get the following error:

```
llm_load_vocab: missing pre-tokenizer type, using: 'default'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ****
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: CONSIDER REGENERATING THE MODEL
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ****
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: special tokens definition check successful ( 256/128256 ).
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: format           = GGUF V3 (latest)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: arch             = llama
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: vocab type       = BPE
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_vocab          = 128256
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_merges         = 280147
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ctx_train      = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd           = 4096
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head           = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head_kv        = 8
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_layer          = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_rot            = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_k    = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_v    = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_gqa            = 4
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_k_gqa     = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_v_gqa     = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_eps       = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_logit_scale    = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ff             = 14336
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert         = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert_used    = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: causal attn      = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: pooling type     = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope type        = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope scaling     = linear
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_base_train  = 500000.0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_scale_train = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_yarn_orig_ctx  = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope_finetuned   = unknown
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_conv       = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_inner      = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_state      = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_dt_rank      = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model type       = 8B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model ftype      = Q4_0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model params     = 8.03 B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: general.name     = model
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: UNK token        = 128002 ''
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: PAD token        = 0 '!'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: LF token         = 128 'Ä'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_tensors: ggml ctx size = 0.30 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloading 13 repeating layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloaded 13/33 layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CPU buffer size  = 4437.80 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CUDA0 buffer size = 1521.41 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: .......................................................................................
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ctx      = 2048
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_batch    = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ubatch   = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: flash_attn = 0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_base  = 500000.0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_scale = 1
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA_Host KV buffer size = 152.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA0 KV buffer size = 104.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
May 30 14:34:55 wbs-desktop ollama[656763]: [1717050895] warming up the model with an empty run
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA0 compute buffer size = 677.48 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph nodes  = 1030
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph splits = 213
May 30 14:34:56 wbs-desktop ollama[656763]: INFO [main] model loaded | tid="139943102652416" timestamp=1717050896
May 30 14:34:56 wbs-desktop ollama[651389]: time=2024-05-30T14:34:56.488+08:00 level=INFO source=server.go:567 msg="llama runner started in 6.06 seconds"
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] slice_image: multiple 1
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 9.25 ms by clip_image_preprocess.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: mm_patch_merge_type is flat.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: ctx->buf_compute_meta.size(): 884880
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: load_image_size: 462 434
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] encode_image_with_clip: image embedding created: 96 tokens
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897]
May 30 14:34:57 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 1025.53 ms by CLIP ( 10.68 ms per image patch)
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] llava_image_embed_make_with_clip_img_ollama: can't read file ./examples/minicpm-v2.5/slice_token_for_ollama.raw
```

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response
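
Context on the final line of the log above: the runner appears to open the slice-token file with the literal relative path ./examples/minicpm-v2.5/slice_token_for_ollama.raw, so the lookup resolves against the working directory of the running ollama server process, not against the repository checkout. Starting the server from the repository root, or copying the file into a matching examples/minicpm-v2.5/ path under the server's working directory, should let the read succeed. Below is a minimal C++ sketch of this presumed failure mode; the function and the SLICE_TOKEN_PATH environment-variable override are hypothetical illustrations, and only the path string comes from the log.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Hypothetical reader illustrating the failure mode: fopen() with a
// relative path resolves against the current working directory of the
// server process, so the file is only found when ollama happens to be
// launched from the repository root.
static std::vector<unsigned char> read_slice_tokens() {
    // Assumed workaround: let an environment variable override the
    // hard-coded relative path seen in the error message.
    const char * env = std::getenv("SLICE_TOKEN_PATH"); // hypothetical variable
    const std::string path = env
        ? env
        : "./examples/minicpm-v2.5/slice_token_for_ollama.raw";

    FILE * f = std::fopen(path.c_str(), "rb");
    if (!f) {
        std::fprintf(stderr, "can't read file %s\n", path.c_str());
        return {};
    }
    std::fseek(f, 0, SEEK_END);
    const long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    std::vector<unsigned char> buf(size > 0 ? (size_t) size : 0);
    if (!buf.empty() && std::fread(buf.data(), 1, buf.size(), f) != buf.size()) {
        buf.clear(); // treat a short read as failure
    }
    std::fclose(f);
    return buf;
}
```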

tc-mb commented 1 month ago


OK, I'll find a way to remove this dependency next week.
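
One plausible way to drop the file dependency, sketched below as an assumption rather than the actual fix: generate a C array from slice_token_for_ollama.raw at build time (for example with `xxd -i`) and read the tokens from memory instead of from disk.

```cpp
#include <cstddef>
#include <vector>

// Sketch: bytes generated at build time from slice_token_for_ollama.raw
// (e.g. via `xxd -i`). The values below are placeholders, not the real
// file contents.
static const unsigned char slice_token_raw[] = { 0x00, 0x01, 0x02 };
static const size_t slice_token_raw_len = sizeof(slice_token_raw);

// Loading from the embedded array removes any dependence on the server
// process's working directory at runtime.
static std::vector<unsigned char> load_slice_tokens_embedded() {
    return std::vector<unsigned char>(slice_token_raw,
                                      slice_token_raw + slice_token_raw_len);
}
```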