laurentperez opened this issue 3 weeks ago
You may be fighting an uphill battle against the model. Since you're using this for completion, I'm guessing you're using the 8B model for responsiveness. You might not be able to consistently get the output you want, even by being more specific in the prompting.
@laurentperez `systemMessage` does not apply to autocomplete, as the prompt has to be very specific and doesn't have room for modification.
Unfortunately llama 3 is intended as a chat model rather than an autocomplete model, so you shouldn't expect great results. I would recommend trying deepseek-coder:6.7b instead: https://ollama.com/library/deepseek-coder
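A minimal config.json entry along those lines might look like this (a sketch, assuming the Ollama provider and that the model has already been pulled locally; the title is illustrative):

```json
{
  "tabAutocompleteModel": {
    "title": "DeepSeek Coder 6.7b",
    "provider": "ollama",
    "model": "deepseek-coder:6.7b"
  }
}
```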
Yep, I understood that when reading the template selector: https://github.com/continuedev/continue/blob/main/core/autocomplete/templates.ts#L314
"llama" will not match any of the `includes` checks, while "deepseek" will, and obviously, as @sestinj pointed out, llama is intended as a chat model anyway (see the sketch below). GitHub Copilot autocompletion uses Codex, but with a highly specific autocompletion prompt too.
I'll report back and close the issue once I've tested deepseek or another model intended for completion.
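For reference, the selection boils down to substring checks on the model name; here is a simplified TypeScript sketch of the idea (paraphrased, not the exact source; the FIM token strings are the formats published for those model families):

```typescript
// Simplified sketch of the template-selector idea in
// core/autocomplete/templates.ts (paraphrased, not the exact source).
// The model name is matched with substring checks, and only
// completion-tuned families get a fill-in-the-middle (FIM) template.
function getAutocompleteTemplate(modelName: string): string {
  const name = modelName.toLowerCase();
  if (name.includes("deepseek")) {
    // DeepSeek Coder's published FIM format.
    return "<｜fim▁begin｜>{{{prefix}}}<｜fim▁hole｜>{{{suffix}}}<｜fim▁end｜>";
  }
  if (name.includes("starcoder")) {
    // StarCoder's published FIM tokens.
    return "<fim_prefix>{{{prefix}}}<fim_suffix>{{{suffix}}}<fim_middle>";
  }
  // "llama" matches neither branch, so it falls back to a plain
  // prefix-only prompt, which is part of why chat models autocomplete poorly.
  return "{{{prefix}}}";
}
```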
Relevant environment info
Description
To reproduce
What happens: how could I set a custom `systemMessage` for the tab completion? I'd like something similar to this: "just output the code, be concise". Setting `systemMessage` in `tabAutocompleteModel` seems to have no effect at all.
Is it even the right scenario here, or, instead of llama3.1, should I just use starcoder? I can't use codestral; I don't want my prompts to be sent to Mistral.
Log output
No response