twinnydotdev / twinny

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
https://twinny.dev
MIT License
2.3k stars 126 forks

invalid option provided option="" #251

Closed jgilfoil closed 1 month ago

jgilfoil commented 1 month ago

Describe the bug
I'm seeing this error in my Ollama server.log with every auto-complete request from Twinny. Auto-complete does appear to be working and giving valid completion suggestions, but I'm confused as to why it generates this error, as the options seem to be submitted properly.

time=2024-05-18T20:38:10.530-06:00 level=INFO source=server.go:545 msg="llama runner started in 3.42 seconds"
[GIN] 2024/05/18 - 20:38:11 | 200 |    4.5533999s |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:38:33.016-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:38:33 | 200 |    611.9007ms |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:38:41.256-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:38:41 | 200 |     274.131ms |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:38:59.734-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:38:59 | 200 |    170.3256ms |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:39:00.653-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:39:00 | 200 |    297.9836ms |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:39:01.752-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:39:02 | 200 |    293.5611ms |    <ip redacted> | POST     "/api/generate"
time=2024-05-18T20:39:02.723-06:00 level=WARN source=types.go:384 msg="invalid option provided" option=""
[GIN] 2024/05/18 - 20:39:03 | 200 |    337.9314ms |    <ip redacted> | POST     "/api/generate"

To Reproduce
Paste this short script and wait for an autocomplete request.

#!/bin/bash

# This script creates a backup of a directory and compresses it into a tar.gz file.

# Define the directory to backup
DIR_TO_BACKUP="/path/to/directory"

# Define the backup destination
BACKUP_DEST="/path/to/backup"

Expected behavior
Just trying to determine whether Twinny is passing the options through properly, or whether the errors mean the options: {} block is being discarded entirely.

Logging

[Extension Host] 
***Twinny Stream Debug***
Streaming response from <ip redacted>:11434.
Request body:
{
  "model": "codellama:7b-code-q4_0",
  "prompt": "<PRE># 

# Language: Shell (shellscript) 
# File uri: untitled:Untitled-2 (shellscript) 
#!/bin/bash

# This script creates a backup of a directory and compresses it into a tar.gz file.

# Define the directory to backup
DIR_TO_BACKUP=\"/path/to/directory\"

# Define the backup destination
BACKUP_DEST=\"/path/to/backup\"

 <SUF>  <MID>",
  "stream": true,
  "keep_alive": "5m",
  "options": {
    "temperature": 0.2,
    "num_predict": 512
  }
}

Request options:
{
  "hostname": "<ip redacted>",
  "port": 11434,
  "path": "/api/generate",
  "protocol": "http",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": ""
  }
}

console.ts:137 [Extension Host] Streaming response end due to multiline not required  61 
Completion: # Define the name of the backup

console.ts:137 [Extension Host] *** Twinny completion triggered for file: untitled:Untitled-2 ***
      Original completion: # Define the name of the backup

      Formatted completion: # Define the name of the backup
      Max Lines: 30
      Use file context: true
      Completed lines count 1
      Using custom FIM template fim.bhs?: false
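The request body logged above looks well-formed, so one hedged hypothesis is that an empty-string key ends up in the options map somewhere before serialization, which would match the option="" in the server warning. A minimal sketch of a guard that would strip such a key (sanitize_options is a hypothetical helper, not part of Twinny):

```python
def sanitize_options(options):
    """Drop empty or non-string option names before a request is sent.
    Ollama logs 'invalid option provided' with option="" when it sees
    an option whose name is an empty string.
    """
    return {k: v for k, v in options.items() if isinstance(k, str) and k.strip()}

# The options block from the request body above, plus a stray empty key
# to illustrate what the server warning would correspond to.
opts = {"temperature": 0.2, "num_predict": 512, "": None}
print(sanitize_options(opts))  # the empty key is dropped
```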

API Provider ollama

ollama -v output:
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.38

Chat or Auto Complete? auto complete

Model Name codellama:7b-code-q4_0


Additional context
Both Ollama and VS Code are running on Windows 10, though I have tried this with VS Code in a remote Linux container and got the same result.

I tried resetting all Twinny settings back to default (except for the host IP), as I'm sometimes using this in remote containers, so it needs to be network-accessible.

rjmacarthy commented 1 month ago

Hello, I cannot find anywhere in the code where the property option is passed to the server. Are you sure it's not something else?

jgilfoil commented 1 month ago

Hmm, not really sure where else it would be coming from. At the time I collected these logs, there shouldn't have been anything else hitting the Ollama API. Seeing as it seems to be working in most cases, I think we can close this out. Thanks for looking into it.

Tabrizian commented 1 month ago

Running into a similar issue:

[GIN] 2024/06/01 - 14:47:55 | 200 |  639.312645ms |      172.17.0.1 | POST     "/v1/chat/completions"
time=2024-06-01T14:47:59.879Z level=WARN source=types.go:384 msg="invalid option provided" option=""
parrt commented 3 weeks ago

I get the same error via a manual POST with the Python requests library to /api/generate. I pass options as a dictionary. Weird.
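For reference, a minimal sketch of such a request, using only option names that Ollama's API documentation lists (num_predict, temperature); model name and prompt here are just placeholders, and the actual POST is left commented out since it needs a running server:

```python
import json

def build_generate_payload(model, prompt, temperature=0.2, num_predict=512):
    """Assemble an /api/generate request body with documented Ollama
    option names only (e.g. num_predict rather than max_tokens)."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature, "num_predict": num_predict},
    }

payload = build_generate_payload("codellama:7b-code-q4_0", "#!/bin/bash\n")
body = json.dumps(payload)
# With Ollama running on the default port, the request itself would be:
# requests.post("http://localhost:11434/api/generate", data=body,
#               headers={"Content-Type": "application/json"})
```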

valentinfrlch commented 2 weeks ago

Has anyone figured this out? Happens to me too. This is the dictionary that's sent:

data = {
    "model": model,
    "messages": [{
        "role": "user",
        "content": message
    }],
    "stream": False,
    "options": {
        "max_tokens": max_tokens,
        "temperature": temperature
    }
}

Both max_tokens and temperature are never None.
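One hedged observation: Ollama's API documentation lists num_predict, not the OpenAI-style max_tokens, among the supported options, so max_tokens itself may be what the server is rejecting here (which could also explain the 400 below). A sketch of a rename shim (to_ollama_options and OPTION_ALIASES are hypothetical, not from this thread):

```python
# Translate OpenAI-style option names to the names Ollama's native
# API documents; drop empty keys while we're at it.
OPTION_ALIASES = {"max_tokens": "num_predict"}

def to_ollama_options(options):
    """Rename aliased option keys and discard empty names."""
    return {OPTION_ALIASES.get(k, k): v for k, v in options.items() if k}

print(to_ollama_options({"max_tokens": 128, "temperature": 0.7}))
```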

This is the log entry:

time=2024-06-25T20:13:55.877Z level=WARN source=types.go:430 msg="invalid option provided" option=""
[GIN] 2024/06/25 - 20:13:56 | 400 |  130.044154ms |    172.16.16.35 | POST     "/api/chat"