Open surajyadav91 opened 1 week ago
This sounds like you've exceeded the context buffer, and the value is the number of tokens that were processed in the last slot window. Try adding `"num_ctx": 60000` to the `options` in the `ollama.chat()` call. Note that this will increase the amount of VRAM required and, depending on your hardware, may push some of the model off the GPU and into system RAM for CPU inference.
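In the Python client that looks roughly like the sketch below. The helper name `chat_kwargs` and the placeholder prompt are mine, and the actual `ollama.chat()` call is left commented out since it needs the `ollama` package and a running server:

```python
# Sketch: passing a larger num_ctx through the ollama-python client.
# The model tag is the one from this issue; 60000 matches the suggestion above.

def chat_kwargs(model: str, prompt: str, num_ctx: int = 60000) -> dict:
    """Build keyword arguments for ollama.chat() with an enlarged context window."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx},
    }

kwargs = chat_kwargs("llama3.1:8b-instruct-fp16", "your long prompt here")

# With the ollama package installed and the server running:
#   import ollama
#   response = ollama.chat(**kwargs)
#   print(response["prompt_eval_count"])  # should now track the real prompt size
```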
Thanks for pointing this out. I hadn't noticed this option earlier: https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values

I can see the default is 2048, so why does it top out at 1026 in my case? Even accounting for the `num_predict` option, whose default value is 128, the maximum should still have been more than 1026. Is this explained in detail anywhere, with examples?

Also, by default `num_ctx` should have been set to the model's maximum context length.
The context buffer is expensive in VRAM, with a cost that grows quadratically with length. As I mentioned above, if layers get pushed off to the CPU, inference speed drops dramatically, so the default value is meant to preserve performance. If the user wants a larger context, it can be extended with `num_ctx` in the API call, or by creating a customized model with `PARAMETER num_ctx xxx` in the Modelfile.
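For the Modelfile route, a minimal sketch might look like this (the base model tag is the one from this issue; 8192 is just an illustrative value, not a recommendation):

```
FROM llama3.1:8b-instruct-fp16
PARAMETER num_ctx 8192
```

followed by `ollama create <name> -f Modelfile` to register the customized model.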
Flash attention can reduce the VRAM cost, but it doesn't work for all models.
What is the issue?
Inconsistent `prompt_eval_count` for large prompts in the Ollama Python library.

For larger prompts, when using the Ollama Python library with the `llama3.1:8b-instruct-fp16` model, `prompt_eval_count` remains constant at a fixed value of 1026 tokens, even when the input prompt size varies significantly. This behavior is observed when using the `ollama.chat()` method.

Sample output:
Tokens: (1026, 15, 1041) Total_prompt_length: 57788
Tokens: (1026, 20, 1046) Total_prompt_length: 57172
Tokens: (1026, 18, 1044) Total_prompt_length: 57744
Current Behavior
`prompt_eval_count` consistently returns the same value (1026), regardless of the actual prompt length. `eval_count` (output tokens) varies as expected (though this might also settle at a fixed value once longer text is generated).

Expected Behavior
`prompt_eval_count` should accurately reflect the number of tokens in the input prompt.

OS
macOS
GPU
Apple
CPU
Apple
Ollama version
0.3.9