ollama / ollama-js

Ollama JavaScript library
https://ollama.ai
MIT License

Timeout for long generation #103

Open slavonnet opened 2 weeks ago

slavonnet commented 2 weeks ago

How do I set a timeout? If generation runs on the CPU (for example, Mixtral 8x22B), the request gets cut off by a timeout before it finishes.

hopperelec commented 2 weeks ago

If you want to abort all requests (or, for simplicity, if you will only ever have one ongoing request), you could use ollama.abort() as in this example https://github.com/ollama/ollama-js/blob/57fafae5d5e79e78f0c3abdcd2e18e7ff5fd1329/examples/abort/any-request.ts#L1-L27. In your case I assume another request comes afterwards, so you would need to clearTimeout once generation has finished to make sure the timer doesn't abort the following requests.
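A rough sketch of that pattern (the 2-minute limit and the model name are just placeholders; like the linked example it assumes a streamed request, since as I understand it ollama.abort() cancels ongoing streamed requests):

import ollama from 'ollama'

// abort all in-flight streamed requests if this one is still running after 2 minutes
const timeout = setTimeout(() => ollama.abort(), 120_000)

try {
  const stream = await ollama.chat({
    model: 'mixtral:8x22b', // placeholder model name
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
  })
  for await (const part of stream) {
    process.stdout.write(part.message.content)
  }
} finally {
  // clear the timer once this request is done so it cannot abort later requests
  clearTimeout(timeout)
}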

If you want to abort a specific request, you could use AbortableAsyncIterator.abort() as in this example https://github.com/ollama/ollama-js/blob/57fafae5d5e79e78f0c3abdcd2e18e7ff5fd1329/examples/abort/specific-request.ts#L1-L31
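Roughly, assuming a streamed request and an illustrative 2-minute limit (the iterator returned by a streamed ollama.chat call exposes abort()):

import ollama from 'ollama'

const stream = await ollama.chat({
  model: 'mixtral:8x22b', // placeholder model name
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true,
})

// abort only this request if it is still running after 2 minutes
const timeout = setTimeout(() => stream.abort(), 120_000)

try {
  for await (const part of stream) {
    process.stdout.write(part.message.content)
  }
} finally {
  // the request finished (or was aborted), so the timer is no longer needed
  clearTimeout(timeout)
}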

slavonnet commented 2 weeks ago

No, I have the opposite situation. Because inference on the CPU takes so long with "stream: false", the code does not wait for the entire output to finish. Does that mean only "stream: true" will help?

try {
    let response = await ollama.chat({
        stream: false,
        model: (model == '' ? defaultModel : model),
        options: {
            num_ctx: 32768,
            num_thread: 32,
            temperature: 0.0
        },
        messages: message
    });

    if (response.message.content.length > 0) {
        return {content: response.message.content};
    } else {
        return {content: "Error. Empty"};
    }
} catch (e) {
    // @ts-ignore
    return {content: "Error.\n" + e.error};
}

and I get:

Error.
undefined

journalctl:

июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.212+03:00 level=INFO source=memory.go:255 msg="offload to gpu" layers.requested=-1 layers.model=57 layers.offload=7 layers.split="" memory.available="[15.6 GiB]" memory.required.full="86.0 GiB" memory.required.partial="15.1 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[15.1 GiB]" memory.weights.total="80.8 GiB" memory.weights.repeating="80.6 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.217+03:00 level=INFO source=memory.go:255 msg="offload to gpu" layers.requested=-1 layers.model=57 layers.offload=7 layers.split="" memory.available="[15.6 GiB]" memory.required.full="86.0 GiB" memory.required.partial="15.1 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[15.1 GiB]" memory.weights.total="80.8 GiB" memory.weights.repeating="80.6 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.222+03:00 level=INFO source=memory.go:255 msg="offload to gpu" layers.requested=-1 layers.model=57 layers.offload=7 layers.split="" memory.available="[15.6 GiB]" memory.required.full="86.0 GiB" memory.required.partial="15.1 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[15.1 GiB]" memory.weights.total="80.8 GiB" memory.weights.repeating="80.6 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.225+03:00 level=INFO source=memory.go:255 msg="offload to gpu" layers.requested=-1 layers.model=57 layers.offload=21 layers.split=7,7,7 memory.available="[15.6 GiB 15.6 GiB 15.6 GiB]" memory.required.full="96.0 GiB" memory.required.partial="45.4 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[15.1 GiB 15.1 GiB 15.1 GiB]" memory.weights.total="80.8 GiB" memory.weights.repeating="80.6 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.227+03:00 level=INFO source=memory.go:255 msg="offload to gpu" layers.requested=-1 layers.model=57 layers.offload=21 layers.split=7,7,7 memory.available="[15.6 GiB 15.6 GiB 15.6 GiB]" memory.required.full="96.0 GiB" memory.required.partial="45.4 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[15.1 GiB 15.1 GiB 15.1 GiB]" memory.weights.total="80.8 GiB" memory.weights.repeating="80.6 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.229+03:00 level=INFO source=server.go:356 msg="starting llama server" cmd="/tmp/ollama3272230688/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 --ctx-size 32768 --batch-size 512 --embedding --log-disable --n-gpu-layers 21 --threads 32 --flash-attn --parallel 1 --tensor-split 7,7,7 --tensor-split 7,7,7 --port 43129"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.230+03:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.230+03:00 level=INFO source=server.go:544 msg="waiting for llama runner to start responding"
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.231+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server error"
июн 16 17:13:56 ai ollama[3672661]: INFO [main] build info | build=1 commit="5921b8f" tid="124068831891456" timestamp=1718547236
июн 16 17:13:56 ai ollama[3672661]: INFO [main] system info | n_threads=32 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="124068831891456" timestamp=1718547236 total_threads=32
июн 16 17:13:56 ai ollama[3672661]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="43129" tid="124068831891456" timestamp=1718547236
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: loaded meta data with 28 key-value pairs and 563 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 (version GGUF V3 (latest))
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   0:                       general.architecture str              = llama
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   1:                               general.name str              = Mixtral-8x22B-Instruct-v0.1
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   2:                          llama.block_count u32              = 56
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 16384
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 48
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  12:                          general.file_type u32              = 2
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32768
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  14:                 llama.rope.dimension_count u32              = 128
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32768]   = [-1000.000000, -1000.000000, -1000.00...
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32768]   = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  24:           tokenizer.chat_template.tool_use str              = {{bos_token}}{% set user_messages = m...
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  25:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{bos_token}}{% for message in messag...
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - kv  27:               general.quantization_version u32              = 2
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - type  f32:  113 tensors
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - type  f16:   56 tensors
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - type q4_0:  281 tensors
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - type q8_0:  112 tensors
июн 16 17:13:56 ai ollama[3619160]: llama_model_loader: - type q6_K:    1 tensors
июн 16 17:13:56 ai ollama[3619160]: llm_load_vocab: special tokens cache size = 259
июн 16 17:13:56 ai ollama[3619160]: llm_load_vocab: token to piece cache size = 0.3464 MB
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: format           = GGUF V3 (latest)
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: arch             = llama
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: vocab type       = SPM
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_vocab          = 32768
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_merges         = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_ctx_train      = 65536
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_embd           = 6144
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_head           = 48
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_head_kv        = 8
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_layer          = 56
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_rot            = 128
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_embd_head_k    = 128
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_embd_head_v    = 128
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_gqa            = 6
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_embd_k_gqa     = 1024
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_embd_v_gqa     = 1024
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: f_norm_eps       = 0.0e+00
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: f_logit_scale    = 0.0e+00
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_ff             = 16384
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_expert         = 8
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_expert_used    = 2
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: causal attn      = 1
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: pooling type     = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: rope type        = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: rope scaling     = linear
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: freq_base_train  = 1000000.0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: freq_scale_train = 1
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: n_yarn_orig_ctx  = 65536
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: rope_finetuned   = unknown
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: ssm_d_conv       = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: ssm_d_inner      = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: ssm_d_state      = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: ssm_dt_rank      = 0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: model type       = 8x22B
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: model ftype      = Q4_0
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: model params     = 140.63 B
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: model size       = 74.05 GiB (4.52 BPW)
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: general.name     = Mixtral-8x22B-Instruct-v0.1
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: BOS token        = 1 '<s>'
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: EOS token        = 2 '</s>'
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: UNK token        = 0 '<unk>'
июн 16 17:13:56 ai ollama[3619160]: llm_load_print_meta: LF token         = 781 '<0x0A>'
июн 16 17:13:56 ai ollama[3619160]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
июн 16 17:13:56 ai ollama[3619160]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
июн 16 17:13:56 ai ollama[3619160]: ggml_cuda_init: found 3 CUDA devices:
июн 16 17:13:56 ai ollama[3619160]:   Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
июн 16 17:13:56 ai ollama[3619160]:   Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
июн 16 17:13:56 ai ollama[3619160]:   Device 2: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
июн 16 17:13:56 ai ollama[3619160]: llm_load_tensors: ggml ctx size =    1.12 MiB
июн 16 17:13:56 ai ollama[3619160]: time=2024-06-16T17:13:56.482+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server loading model"
июн 16 17:13:57 ai ollama[3619160]: time=2024-06-16T17:13:57.939+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server not responding"
июн 16 17:13:59 ai ollama[3619160]: time=2024-06-16T17:13:59.996+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server loading model"
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors: offloading 21 repeating layers to GPU
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors: offloaded 21/57 layers to GPU
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors:        CPU buffer size = 75831.40 MiB
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors:      CUDA0 buffer size =  9445.73 MiB
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors:      CUDA1 buffer size =  9445.73 MiB
июн 16 17:14:15 ai ollama[3619160]: llm_load_tensors:      CUDA2 buffer size =  9445.73 MiB
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: n_ctx      = 32768
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: n_batch    = 512
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: n_ubatch   = 512
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: flash_attn = 1
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: freq_base  = 1000000.0
июн 16 17:14:23 ai ollama[3619160]: llama_new_context_with_model: freq_scale = 1
июн 16 17:14:24 ai ollama[3619160]: time=2024-06-16T17:14:24.092+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server not responding"
июн 16 17:14:26 ai ollama[3619160]: time=2024-06-16T17:14:26.150+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server loading model"
июн 16 17:14:27 ai ollama[3619160]: llama_kv_cache_init:  CUDA_Host KV buffer size =  4480.00 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_kv_cache_init:      CUDA0 KV buffer size =   896.00 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_kv_cache_init:      CUDA1 KV buffer size =   896.00 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_kv_cache_init:      CUDA2 KV buffer size =   896.00 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model: KV self size  = 7168.00 MiB, K (f16): 3584.00 MiB, V (f16): 3584.00 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model:  CUDA_Host  output buffer size =     0.15 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model:      CUDA0 compute buffer size =   700.25 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model:      CUDA1 compute buffer size =   184.03 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model:      CUDA2 compute buffer size =   184.04 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model:  CUDA_Host compute buffer size =    76.01 MiB
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model: graph nodes  = 2415
июн 16 17:14:27 ai ollama[3619160]: llama_new_context_with_model: graph splits = 426
июн 16 17:14:35 ai ollama[3672661]: INFO [main] model loaded | tid="124068831891456" timestamp=1718547275
июн 16 17:14:36 ai ollama[3619160]: time=2024-06-16T17:14:36.187+03:00 level=INFO source=server.go:582 msg="waiting for server to become available" status="llm server not responding"
июн 16 17:14:36 ai ollama[3619160]: time=2024-06-16T17:14:36.443+03:00 level=INFO source=server.go:587 msg="llama runner started in 40.21 seconds"

Here the inference runs and then times out. The log entry after the undefined error:

июн 16 17:18:56 ai ollama[3619160]: [GIN] 2024/06/16 - 17:18:56 | 500 |          5m1s |       127.0.0.1 | POST     "/api/chat"

Looks like the server has a 5-minute timeout.

hopperelec commented 2 weeks ago

Oh well, by default Ollama removes the model from memory after 5 minutes, so that could be what's causing this; see the Ollama FAQ for more information. I would have imagined that this timer only starts once generation has finished, and I'm not sure why streaming the response would fix this, but in case this is the issue you could try increasing the time (or preventing the unload entirely by setting it to -1) via the keep_alive option.
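For example, a minimal sketch of passing keep_alive on the chat call (the model name and the "30m" value are just placeholders):

import ollama from 'ollama'

const response = await ollama.chat({
  model: 'mixtral:8x22b', // placeholder model name
  stream: false,
  // keep the model loaded for 30 minutes after this request,
  // or set -1 to keep it loaded indefinitely
  keep_alive: '30m',
  messages: [{ role: 'user', content: 'Hello' }],
})
console.log(response.message.content)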

slavonnet commented 2 weeks ago

keep_alive: "15m" didn't help. OLLAMA_KEEP_ALIVE=15m also didn't help.

According to the description, this parameter should control how long the model is kept loaded after an inference so it is ready for subsequent requests.

Either way, it still fails at 5m1s:

июн 16 17:43:34 ai ollama[3733311]: [GIN] 2024/06/16 - 17:43:34 | 500 |          5m1s |       127.0.0.1 | POST     "/api/chat"