Closed: eliranwong closed this issue 6 days ago.
Hi, please provide the output of
a) `ollama serve`
b) `ollama ls`
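If it helps, here is a sketch of one way to capture both in Termux (this assumes you can open a second session for the client commands):

```sh
# Session 1: start the server and keep a copy of its log
ollama serve 2>&1 | tee ~/ollama-serve.log

# Session 2: list the locally installed models
ollama ls
```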
Ollama serve output
```
[GIN] 2024/10/04 - 22:27:40 | 200 | 123.616µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/04 - 22:27:40 | 200 | 74.036011ms | 127.0.0.1 | POST "/api/show"
time=2024-10-04T22:27:40.975Z level=INFO source=server.go:103 msg="system memory" total="11.3 GiB" free="2.4 GiB" free_swap="3.0 GiB"
time=2024-10-04T22:27:40.985Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=17 layers.offload=0 layers.split="" memory.available="[2.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.1 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB" memory.required.allocations="[2.1 GiB]" memory.weights.total="1.2 GiB" memory.weights.repeating="976.1 MiB" memory.weights.nonrepeating="266.2 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="554.3 MiB"
time=2024-10-04T22:27:40.997Z level=INFO source=server.go:388 msg="starting llama server" cmd="/data/data/com.termux/files/usr/tmp/ollama2311113242/runners/cpu/ollama_llama_server --model /data/data/com.termux/files/home/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 46207"
time=2024-10-04T22:27:41.000Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-04T22:27:41.000Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-04T22:27:41.003Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3670 commit="bc1735bf" tid="521371786520" timestamp=1728080861
INFO [main] system info | n_threads=9 n_threads_batch=9 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="521371786520" timestamp=1728080861 total_threads=9
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="8" port="46207" tid="521371786520" timestamp=1728080861
llama_model_loader: loaded meta data with 30 key-value pairs and 147 tensors from /data/data/com.termux/files/home/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 1B
llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 8: llama.block_count u32 = 16
llama_model_loader: - kv 9: llama.context_length u32 = 131072
llama_model_loader: - kv 10: llama.embedding_length u32 = 2048
llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 12: llama.attention.head_count u32 = 32
llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: llama.attention.key_length u32 = 64
llama_model_loader: - kv 17: llama.attention.value_length u32 = 64
llama_model_loader: - kv 18: general.file_type u32 = 7
llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 34 tensors
llama_model_loader: - type q8_0: 113 tensors
time=2024-10-04T22:27:41.258Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 16
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 1.24 B
llm_load_print_meta: model size = 1.22 GiB (8.50 BPW)
llm_load_print_meta: general.name = Llama 3.2 1B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.07 MiB
llm_load_tensors: CPU buffer size = 1518.57 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 1.99 MiB
llama_new_context_with_model: CPU compute buffer size = 544.01 MiB
llama_new_context_with_model: graph nodes = 518
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="521371786520" timestamp=1728080864
time=2024-10-04T22:27:45.048Z level=INFO source=server.go:626 msg="llama runner started in 4.05 seconds"
[GIN] 2024/10/04 - 22:27:45 | 200 | 4.480423668s | 127.0.0.1 | POST "/api/generate"
>>> hi
How can I help you today?
[GIN] 2024/10/04 - 22:27:49 | 200 | 2.025175375s | 127.0.0.1 | POST "/api/chat"
```
Ollama ps output
```
[GIN] 2024/10/04 - 22:27:56 | 200 | 275.961µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/04 - 22:27:56 | 200 | 139.73µs | 127.0.0.1 | GET "/api/ps"
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.2 GB    100% CPU     4 minutes from now
```
Ollama ls output
```
[GIN] 2024/10/04 - 22:33:33 | 200 | 1.449219ms | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/04 - 22:33:33 | 200 | 16.681275ms | 127.0.0.1 | GET "/api/tags"
NAME               ID              SIZE      MODIFIED
llama3.2:1b        baf6a787fdff    1.3 GB    3 hours ago
llama3.2:3b        a80c4f17acd5    2.0 GB    12 hours ago
llava:latest       8dd30f6b0cb1    4.7 GB    14 hours ago
llama3.1:latest    42182419e950    4.7 GB    14 hours ago
wizardlm2:7b       c9b1aff820f2    4.1 GB    28 hours ago
```
I can't reproduce.
I followed your steps and was able to see and select the model (see image).
Is the Ollama vendor configured or skipped? Please paste the contents of `~/.config/fabric/.env`.
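For comparison, a minimal sketch of what an Ollama-enabled `.env` might contain. The key names below are illustrative assumptions, not Fabric's authoritative variable names, so check them against what `fabric --setup` actually writes:

```sh
# Hypothetical ~/.config/fabric/.env with the Ollama vendor configured.
# Key names are assumptions for illustration only.
DEFAULT_VENDOR=Ollama
DEFAULT_MODEL=llama3.2:1b
OLLAMA_API_URL=http://localhost:11434
```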
What happened?
I installed Ollama and Fabric on Android via Termux. My notes: https://github.com/eliranwong/toolmate/blob/main/package/toolmate/docs/Termux%20Setup.md#instal-ollama-on-termux
When I run
fabric --setup
to select a default model, I do not see the Ollama models for selection. Remark: the Ollama binary, in this case, is placed at /data/data/com.termux/files/usr/bin/. A sanity check is sketched below.
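As a sanity check that the server is reachable where a client would look for it, you can hit the same `GET /api/tags` endpoint that appears in the serve log above. This assumes Ollama is listening on its default port:

```sh
# List models straight from the Ollama HTTP API.
# Assumes the default listen address; adjust if OLLAMA_HOST is set.
curl http://127.0.0.1:11434/api/tags
```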
Version check
Relevant log output
No response
Relevant screenshots (optional)
No response