Open vrgimael opened 1 month ago
Your settings look correct. Please could you share the debug output? Many thanks.
I ran into the same issue. Every code completion is completely off-point, to the degree that I immediately turned it off and said "NOPE". It was worse than Continue's completion.
It does the same on my Mac and on Ubuntu.
nav ul li {
  display: inline;
  margin-right: 10px;
}
Debug
--------------------------------------------
[Extension Host] [twinny] ***Twinny Stream Debug***
Streaming response from 192.168.50.44:11434.
Request body:
{
  "model": "qwen2.5-coder:1.5b",
  "prompt": "<PRE>/**/ \n\n/* Language: CSS (css) */\n/* File uri: file:///home/joseph/ai/my-vscode-extension/webview/src/styles.css (css) */\nbody {\n font-family: Arial, sans-serif;\n margin: 0;\n padding: 0;\n background-color: #333;\n}\n\nnav {\n background-color: #6b654a;\n color: rgb(226, 227, 194);\n padding: 10px;\n}\n\nnav ul {\n list-style-type: none;\n padding: 0;\n}\n\nnav ul li {\n display: inline;\n mar <SUF> \n margin-right: 10px;\n}\n\nnav ul li a {\n color: rgb(226, 227, 194);\n font-weight: bolder;\n /* text-decoration: none; */\n}\n\n#content {\n padding: 20px;\n}\n <MID>",
  "stream": true,
  "keep_alive": "5m",
  "options": {
    "temperature": 0.2,
    "num_predict": 512
  }
}
Request options:
{
  "hostname": "123.123.123.123",
  "port": 11434,
  "path": "/api/generate",
  "protocol": "http",
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": ""
  }
}
[Extension Host] [twinny] Streaming response end due to multiline not required 22
Completion: ```
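For what it's worth, here's a rough sketch of replaying the same request directly against Ollama to take the extension out of the loop (assumes Node 18+ for the built-in fetch; host, port, model, and the <PRE>/<SUF>/<MID> tokens are copied from the debug output above, with the prompt trimmed to the snippet around the cursor for brevity):

```typescript
// Rough sketch: replay the extension's FIM request straight against Ollama.
// Host, port, model and prompt template are taken from the debug output above;
// adjust them to match your own setup.
const OLLAMA_URL = "http://192.168.50.44:11434/api/generate";

async function replayCompletion(): Promise<void> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-coder:1.5b",
      prompt:
        "<PRE> nav ul li {\n  display: inline;\n  mar <SUF> \n  margin-right: 10px;\n}\n <MID>",
      stream: false,
      keep_alive: "5m",
      options: { temperature: 0.2, num_predict: 64 },
    }),
  });
  // With stream: false, Ollama returns a single JSON object whose "response"
  // field holds the generated text.
  const data = (await res.json()) as { response?: string };
  console.log("completion:", JSON.stringify(data.response));
}

replayCompletion().catch(console.error);
```

The raw output here is just as broken as what the extension shows, so it doesn't look like a prompting bug on the extension side.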
Hey @atljoseph, thanks for the report. The reason you're getting bad output is that you're using an instruct model for FIM completions; you should use a base model instead.
Based on your debug output you are using the qwen2.5-coder 1.5b, so I'd recommend https://ollama.com/library/qwen2.5-coder:1.5b-base for you. FYI: I have not tested this model myself, so I cannot guarantee its accuracy; however, the 7b works perfectly for me.
Please refer to the documentation for more supported models using FIM:
https://twinnydotdev.github.io/twinny-docs/general/supported-models/
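If it helps, here's a rough, untested sketch for checking that the base tag is actually pulled and then running the same FIM prompt against it outside the editor (host and port are copied from your debug output; the :1.5b-base tag is the recommendation above):

```typescript
// Rough sketch: confirm the base tag is installed locally, then run the same
// FIM prompt against it. Pull it first with `ollama pull qwen2.5-coder:1.5b-base`.
const HOST = "http://192.168.50.44:11434";
const MODEL = "qwen2.5-coder:1.5b-base";

async function tryBaseModel(): Promise<void> {
  // GET /api/tags lists the models available on the local Ollama install.
  const tags = (await (await fetch(`${HOST}/api/tags`)).json()) as {
    models: { name: string }[];
  };
  if (!tags.models.some((m) => m.name === MODEL)) {
    throw new Error(`${MODEL} is not pulled yet`);
  }

  const res = await fetch(`${HOST}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL, // the only change from the earlier request: the base tag
      prompt:
        "<PRE> nav ul li {\n  display: inline;\n  mar <SUF> \n  margin-right: 10px;\n}\n <MID>",
      stream: false,
      options: { temperature: 0.2, num_predict: 64 },
    }),
  });
  console.log(((await res.json()) as { response?: string }).response);
}

tryBaseModel().catch(console.error);
```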
Many thanks,
Describe the bug The autocomplete suggestions returned are always nonsensical.
To Reproduce I just tried to use the VS Code extension with Ollama, using the model proposed in the tutorial.
Expected behavior I expected something that would at the very least make syntactic sense, but the output is always broken, as if the prompt were incorrect (however, I looked at the template code and the logged info, and it seems fine).
API Provider Ollama
Chat or Auto Complete? Autocomplete (haven't really tested the chat)
Model Name codellama:7b-code, qwen2.5-coder:7b-base, stable-code:3b-code, deepseek-coder:6.7b-base
Additional context I also tried the config with several different templates, with similarly broken (though different) results.
Please let me know if any further information is needed.