OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

The cost of trying this out is a bit high; could you release prebuilt x86 packages? #10

Open lingyezhixing opened 1 month ago

lingyezhixing commented 1 month ago

As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?

yuezhishun commented 1 month ago

The llama.cpp branch hasn't been merged yet 😂 so getting this set up is a bit of a hassle.

hhao commented 1 month ago

> As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?

The latest x86 Docker image has now been built, with GPU support. See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5

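(For anyone landing here: a minimal sketch of how such an image is typically run, following the standard Ollama-in-Docker pattern. The Docker image name is an assumption — the exact pull command is on the model page linked above; only the model tag hhao/openbmb-minicpm-llama3-v-2_5 comes from that link.)

```sh
# Standard Ollama-in-Docker pattern with GPU passthrough.
# IMAGE is a placeholder -- substitute the image name from the model page above.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama IMAGE

# Pull and chat with the model tag from the linked page.
docker exec -it ollama ollama run hhao/openbmb-minicpm-llama3-v-2_5
```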

tc-mb commented 1 month ago

> As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?

The branch hasn't been merged yet, so that may not be easy. Perhaps we can revisit this once our branch is merged upstream.

tc-mb commented 1 month ago

> > As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?
>
> The latest x86 Docker image has now been built, with GPU support. See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5

Wow, thanks!

chaoqunxie commented 1 month ago

> > As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?
>
> The latest x86 Docker image has now been built, with GPU support. See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5


Upvoted. Docker is all well and good, but it's still a hassle for those of us who want to use this on Windows.

hyphantom commented 3 weeks ago

> > As the title says: to use this, you first have to set up a build environment, so the cost of just trying it out is considerable. Could you release prebuilt packages for a few common platforms?
>
> The latest x86 Docker image has now been built, with GPU support. See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5


Thanks for the image. It runs fine for me under Docker on Windows, but the int4 model can't use the GPU. My card is a 4070m with 8 GB. In the log I see:

msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="6.4 GiB" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"

So does that mean 8 GB of VRAM still can't run the int4 model?

The actual log is:

2024-06-09 12:53:02 INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140462431668096" timestamp=1717908782 total_threads=32
2024-06-09 12:53:02 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="35773" tid="140462431668096" timestamp=1717908782
2024-06-09 12:48:55 2024/06/09 04:48:55 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost: https://localhost: http://127.0.0.1 https://127.0.0.1 http://127.0.0.1: https://127.0.0.1: http://0.0.0.0 https://0.0.0.0 http://0.0.0.0: https://0.0.0.0:] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
2024-06-09 12:48:55 time=2024-06-09T04:48:55.523Z level=INFO source=images.go:729 msg="total blobs: 0"
2024-06-09 12:48:55 time=2024-06-09T04:48:55.523Z level=INFO source=images.go:736 msg="total unused blobs removed: 0"
2024-06-09 12:48:55 time=2024-06-09T04:48:55.523Z level=INFO source=routes.go:1074 msg="Listening on [::]:11434 (version b5000f5-dirty)"
2024-06-09 12:48:55 time=2024-06-09T04:48:55.524Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2402259148/runners
2024-06-09 12:48:58 time=2024-06-09T04:48:58.631Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60002 cpu cpu_avx cpu_avx2 cuda_v11]"
2024-06-09 12:48:58 time=2024-06-09T04:48:58.632Z level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="15.5 GiB" available="12.2 GiB"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.553Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="6.4 GiB" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.553Z level=WARN source=server.go:227 msg="multimodal models don't support parallel requests yet"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.553Z level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama2402259148/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 --ctx-size 2048 --batch-size 512 --embedding --log-disable --mmproj /root/.ollama/models/blobs/sha256-391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e --parallel 1 --port 35773"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.553Z level=INFO source=sched.go:338 msg="loaded runners" count=1
2024-06-09 12:53:02 time=2024-06-09T04:53:02.554Z level=INFO source=server.go:525 msg="waiting for llama runner to start responding"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.554Z level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server error"
2024-06-09 12:53:02 time=2024-06-09T04:53:02.805Z level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server loading model"
2024-06-09 12:53:03 llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 (version GGUF V3 (latest))
2024-06-09 12:53:03 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2024-06-09 12:53:03 llama_model_loader: - kv 0: general.architecture str = llama
2024-06-09 12:53:03 llama_model_loader: - kv 1: general.name str = model
2024-06-09 12:53:03 llama_model_loader: - kv 2: llama.vocab_size u32 = 128256
2024-06-09 12:53:03 llama_model_loader: - kv 3: llama.context_length u32 = 8192
2024-06-09 12:53:03 llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
2024-06-09 12:53:03 llama_model_loader: - kv 5: llama.block_count u32 = 32
2024-06-09 12:53:03 llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
2024-06-09 12:53:03 llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
2024-06-09 12:53:03 llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
2024-06-09 12:53:03 llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
2024-06-09 12:53:03 llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2024-06-09 12:53:03 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
2024-06-09 12:53:03 llama_model_loader: - kv 12: general.file_type u32 = 15
2024-06-09 12:53:03 llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
2024-06-09 12:53:03 llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
2024-06-09 12:53:03 llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,128256] = [0.000000, 0.000000, 0.000000, 0.0000...
2024-06-09 12:53:03 llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2024-06-09 12:53:04 llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2024-06-09 12:53:04 llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
2024-06-09 12:53:04 llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128001
2024-06-09 12:53:04 llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 128002
2024-06-09 12:53:04 llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
2024-06-09 12:53:04 llama_model_loader: - kv 22: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
2024-06-09 12:53:04 llama_model_loader: - kv 23: general.quantization_version u32 = 2
2024-06-09 12:53:04 llama_model_loader: - type f32: 65 tensors
2024-06-09 12:53:04 llama_model_loader: - type q4_K: 193 tensors
2024-06-09 12:53:04 llama_model_loader: - type q6_K: 33 tensors
2024-06-09 12:53:04 llm_load_vocab: missing pre-tokenizer type, using: 'default'
2024-06-09 12:53:04 llm_load_vocab:
2024-06-09 12:53:04 llm_load_vocab: ****
2024-06-09 12:53:04 llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
2024-06-09 12:53:04 llm_load_vocab: CONSIDER REGENERATING THE MODEL
2024-06-09 12:53:04 llm_load_vocab: ****
2024-06-09 12:53:04 llm_load_vocab:
2024-06-09 12:53:04 llm_load_vocab: special tokens definition check successful ( 256/128256 ).
2024-06-09 12:53:04 llm_load_print_meta: format = GGUF V3 (latest)
2024-06-09 12:53:04 llm_load_print_meta: arch = llama
2024-06-09 12:53:04 llm_load_print_meta: vocab type = BPE
2024-06-09 12:53:04 llm_load_print_meta: n_vocab = 128256
2024-06-09 12:53:04 llm_load_print_meta: n_merges = 280147
2024-06-09 12:53:04 llm_load_print_meta: n_ctx_train = 8192
2024-06-09 12:53:04 llm_load_print_meta: n_embd = 4096
2024-06-09 12:53:04 llm_load_print_meta: n_head = 32
2024-06-09 12:53:04 llm_load_print_meta: n_head_kv = 8
2024-06-09 12:53:04 llm_load_print_meta: n_layer = 32
2024-06-09 12:53:04 llm_load_print_meta: n_rot = 128
2024-06-09 12:53:04 llm_load_print_meta: n_embd_head_k = 128
2024-06-09 12:53:04 llm_load_print_meta: n_embd_head_v = 128
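(Worth noting in the log above: the "inference compute" line reports library=cpu, and the runner actually launched is /tmp/ollama2402259148/runners/cpu_avx2/ollama_llama_server, even though cuda_v11 is in the dynamic library list. That suggests the GPU was never detected inside the container, rather than a VRAM shortfall: the memory estimate shows 5.8 GiB required against 6.4 GiB available, so the int4 model should fit on an 8 GB card. A quick sanity check that Docker on Windows/WSL2 can see the GPU at all, sketched below; the CUDA image tag and the container name "ollama" are assumptions:)

```sh
# Check that the NVIDIA container runtime works inside Docker at all.
# The image tag is an assumption -- any CUDA base image with nvidia-smi will do.
docker run --rm --gpus=all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Confirm the ollama container itself was started with GPU access
# (an empty result means it was run without --gpus).
docker inspect ollama --format '{{.HostConfig.DeviceRequests}}'
```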