Hello, I cannot find anywhere in the code where the option property is passed to the server. Are you sure it's not something else?
Hmm, not really sure where else it would be coming from. At the time I collected these logs, there shouldn't have been anything else hitting the Ollama API. Seeing as it seems to be working in most cases, I think we can close this out. Thanks for looking into it.
Running into a similar issue:
[GIN] 2024/06/01 - 14:47:55 | 200 | 639.312645ms | 172.17.0.1 | POST "/v1/chat/completions"
time=2024-06-01T14:47:59.879Z level=WARN source=types.go:384 msg="invalid option provided" option=""
I get the same error with a manual POST to /api/generate via the Python requests library. I pass options as a dictionary. Weird.
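For reference, a minimal version of that kind of request looks roughly like this (model name, prompt, and option values are placeholders, not the exact request that produced the warning):

import requests

# Minimal manual POST to /api/generate with an options dictionary.
payload = {
    "model": "codellama:7b-code-q4_0",  # placeholder model
    "prompt": "def fibonacci(n):",
    "stream": False,
    "options": {
        "temperature": 0.2,
    },
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json().get("response", ""))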
Has anyone figured this out? Happens to me too. This is the dictionary that's sent:
data = {
    "model": model,
    "messages": [{
        "role": "user",
        "content": message
    }],
    "stream": False,
    "options": {
        "max_tokens": max_tokens,
        "temperature": temperature
    }
}
Both max_tokens and temperature are never None.
This is the log entry:
time=2024-06-25T20:13:55.877Z level=WARN source=types.go:430 msg="invalid option provided" option=""
[GIN] 2024/06/25 - 20:13:56 | 400 | 130.044154ms | 172.16.16.35 | POST "/api/chat"
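One possible explanation (just a guess, not verified against the Ollama source): max_tokens is an OpenAI-style parameter name, and Ollama's native options use num_predict for the output-token limit, so an unrecognized key inside options may be what triggers the "invalid option provided" warning. A sketch of the same payload with that substitution (URL and values are placeholders):

import requests

# Same /api/chat payload, but with Ollama's native option name for the token limit.
# num_predict replaces the OpenAI-style max_tokens; values here are placeholders.
data = {
    "model": "codellama:7b-code-q4_0",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "options": {
        "num_predict": 256,
        "temperature": 0.7,
    },
}
resp = requests.post("http://localhost:11434/api/chat", json=data, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])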
Describe the bug
I'm seeing this error in my ollama server.log with every auto-complete request from Twinny. Auto-complete does appear to be working and giving valid completion suggestions, but I'm confused as to why it's generating this error, as the options seem to be submitted properly.
To Reproduce
Paste this short script and wait for an autocomplete request.
Expected behavior
Just trying to determine if Twinny is leveraging the options properly, or if the errors mean options: {} is being discarded entirely. (One rough way to check this is sketched at the end of this report.)
Logging
API Provider
ollama

ollama -v:
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.38
Chat or Auto Complete?
Auto complete
Model Name
codellama:7b-code-q4_0
Additional context
Both Ollama and VS Code are running on Windows 10, though I have also tried VS Code with a remote Linux container and got the same result.
I tried resetting all Twinny settings back to default (except for the host IP), since I sometimes use this in remote containers, so it needs to be network accessible.
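As a rough way to check whether options is actually honored rather than discarded (my own sketch, not anything from the Twinny code; model name and prompt are placeholders): send the same prompt twice, once with no options and once with num_predict: 1. If the options are applied, the second reply should be cut to roughly a single token; if they are discarded, both replies will be normal length.

import requests

def chat(options):
    # Helper: one non-streaming /api/chat call with the given options dict.
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "codellama:7b-code-q4_0",  # placeholder model
            "messages": [{"role": "user", "content": "Write a haiku about server logs."}],
            "stream": False,
            "options": options,
        },
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["message"]["content"]

# If "options" were ignored, both outputs would be about the same length.
print("no options:   ", len(chat({})))
print("num_predict=1:", len(chat({"num_predict": 1})))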