Some of the newer llama3 models have RoPE-extended context windows, and the maximum token (context) size has to be sent with the original HTTP request so the server can adjust to it.
Would it be possible to add a new parameter to the config file that sets the max token count to an integer value?
For example, something like this:
llm:
  api_type: "ollama"  # or azure / ollama / open_llm etc. Check LLMType for more options
  model: "dolphin-llama3:8b-256k-v2.9-q4_0"  # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
  base_url: "http://snabox:11434/api"  # or forward url / other llm url
  num_token: 64000  # <--- new setting that gets sent with the HTTP request so ollama knows the max token size possible
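
To make the intent concrete, here is a minimal sketch (not the project's actual code) of how a value like num_token could be forwarded to Ollama. Ollama's /api/chat endpoint accepts an "options" object, and "num_ctx" is the field that sets the context window per request; the variable names below just mirror the example config above, and the proposed num_token key itself is an assumption.

# Sketch only: forward a configured max token / context size to Ollama.
import requests

base_url = "http://snabox:11434/api"           # from config: llm.base_url
model = "dolphin-llama3:8b-256k-v2.9-q4_0"     # from config: llm.model
num_token = 64000                              # proposed config key: llm.num_token

payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    # Ollama reads the context-window size from options.num_ctx
    "options": {"num_ctx": num_token},
}

resp = requests.post(f"{base_url}/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])

Without something like this, requests fall back to Ollama's default context length, so the extended-context models never get to use their larger window.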