twinnydotdev / twinny

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
https://twinny.dev
MIT License

Code completion broken (nonsense output) #338

Open vrgimael opened 1 month ago

vrgimael commented 1 month ago

Describe the bug The autocomplete suggestions returned are always kinda nonsense.

To Reproduce I just tried to use the VSCode extension with Ollama, using the model proposed in the tutorial.

Expected behavior I expected output that would at the very least make syntactic sense, but it is always broken, as if the prompt were incorrect (however, I looked at the code for the templates and the logged info, and it all seems fine).

Screenshots

[Two screenshots, taken 2024-10-02 at 21:11, showing the broken completion suggestions]

API Provider Ollama

Chat or Auto Complete? Autocomplete (I haven't really tested the chat)

Model Name codellama:7b-code, qwen2.5-coder:7b-base, stable-code:3b-code, deepseek-coder:6.7b-base


Additional context I also tried the config with several different templates, with similarly (though differently) broken results.

Please let me know if any further information is needed.

rjmacarthy commented 1 month ago

Your settings look correct. Please could you share the debug output? Many thanks.

atljoseph commented 3 weeks ago

I ran into the same issue. Every code completion is absolutely, completely, positively off-point, to the degree that I immediately turned it off and said "NOPE". It was worse than Continue's completion.

It does the same on my Mac and on Ubuntu.

In styles.css:

nav ul li { display: inline;


    margin-right: 10px;
}

Debug
--------------------------------------------
[Extension Host] [twinny] ***Twinny Stream Debug***
    Streaming response from 192.168.50.44:11434.
    Request body:
    {
  "model": "qwen2.5-coder:1.5b",
  "prompt": "<PRE>/**/ \n\n/* Language: CSS (css) */\n/* File uri: file:///home/joseph/ai/my-vscode-extension/webview/src/styles.css (css) */\nbody {\n    font-family: Arial, sans-serif;\n    margin: 0;\n    padding: 0;\n    background-color: #333;\n}\n\nnav {\n    background-color: #6b654a;\n    color: rgb(226, 227, 194);\n    padding: 10px;\n}\n\nnav ul {\n    list-style-type: none;\n    padding: 0;\n}\n\nnav ul li {\n    display: inline;\n    mar <SUF> \n    margin-right: 10px;\n}\n\nnav ul li a {\n    color: rgb(226, 227, 194);\n    font-weight: bolder;\n    /* text-decoration: none; */\n}\n\n#content {\n    padding: 20px;\n}\n <MID>",
  "stream": true,
  "keep_alive": "5m",
  "options": {
    "temperature": 0.2,
    "num_predict": 512
  }
}

    Request options:
    {
  "hostname": "123.123.123.123",
  "port": 11434,
  "path": "/api/generate",
  "protocol": "http",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": ""
  }
}
[Extension Host] [twinny] Streaming response end due to multiline not required  22 
Completion: ```
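
For context on the request above: the "prompt" field wraps the code before and after the cursor in CodeLlama's <PRE>/<SUF>/<MID> fill-in-the-middle tokens. A minimal TypeScript sketch of that round trip against Ollama's /api/generate endpoint (assuming Node 18+ for global fetch; buildFimPrompt and complete are illustrative names, not twinny's actual code):

// Illustrative sketch only, not twinny's actual implementation.
// Builds a CodeLlama-style FIM prompt (matching the request body above)
// and sends it to Ollama's /api/generate endpoint.

function buildFimPrompt(prefix: string, suffix: string): string {
  // CodeLlama FIM convention: <PRE> {prefix} <SUF> {suffix} <MID>
  return `<PRE> ${prefix} <SUF> ${suffix} <MID>`;
}

async function complete(prefix: string, suffix: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code", // a base FIM model, not an instruct variant
      prompt: buildFimPrompt(prefix, suffix),
      stream: false, // one JSON object instead of a stream, for simplicity
      options: { temperature: 0.2, num_predict: 512 },
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response; // the text to insert at the cursor
}

With "stream": false, Ollama returns a single JSON object whose response field holds the completion text, which makes it easy to inspect what the model actually produced.
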
rjmacarthy commented 3 weeks ago

Hey @atljoseph, thanks for the report. The reason you are getting bad output is that you're using an instruct model for FIM completions. You should use a base model instead.

Based on your debug output you are using qwen2.5-coder:1.5b, so I'd recommend https://ollama.com/library/qwen2.5-coder:1.5b-base instead. FYI, I have not tested that model myself so I cannot guarantee its accuracy; however, the 7b works perfectly for me.
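
If you want to sanity-check the base model outside the editor, here is a hedged TypeScript sketch against the same /api/generate endpoint. It assumes qwen2.5-coder:1.5b-base has been pulled, Ollama is on the default local port, and that Qwen2.5-Coder's FIM tokens are <|fim_prefix|>/<|fim_suffix|>/<|fim_middle|> (they differ from CodeLlama's <PRE>/<SUF>/<MID>, so templates are not interchangeable between model families):

// Sketch: query the base model directly to verify FIM works, bypassing the editor.
// Assumes qwen2.5-coder:1.5b-base is pulled and Ollama listens on localhost:11434.
// Note the Qwen FIM tokens; they differ from CodeLlama's <PRE>/<SUF>/<MID>.

const prompt =
  "<|fim_prefix|>nav ul li {\n    display: inline;\n    mar" +
  "<|fim_suffix|>\n    margin-right: 10px;\n}<|fim_middle|>";

const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5-coder:1.5b-base",
    prompt,
    stream: false,
    options: { temperature: 0.2, num_predict: 64 },
  }),
});
const { response } = (await res.json()) as { response: string };
console.log(response);
// A base model should complete "mar" into valid CSS (e.g. "gin: 0;");
// an instruct model tends to answer with prose or a stray ``` fence instead,
// which matches the broken completion in the debug output above.
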

Please refer to the documentation for other models that support FIM:

https://twinnydotdev.github.io/twinny-docs/general/supported-models/

Many thanks,