(intellij) tab completion outputs too verbose completion "it looks like you are coding(...)" instead of actual code #2104

Open laurentperez opened 3 weeks ago

laurentperez commented 3 weeks ago

Relevant environment info

- OS: Ubuntu 22.04.4 LTS
- Continue: 0.0.62
- IDE: idea
- Model: llama3.1:latest
- config.json:

(...)
  "tabAutocompleteModel": {
    "title": "Tab",
    "provider": "openai",
      "model": "llama3.1:latest",
      "apiBase": "http://xxxx:11434"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text" # using local ollama not remote
  }

Description

What happens:

How can I set a custom systemMessage for tab completion? I'd like something along the lines of "just output the code, be concise".

Setting systemMessage in tabAutocompleteModel seems to have no effect at all.

Is this even the right approach here, or should I use starcoder instead of llama3.1? I can't use codestral, because I don't want my prompts to be sent to Mistral.
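
For reference, this is roughly what I tried (a trimmed sketch of my config; the systemMessage line is the part that seems to be ignored):

  "tabAutocompleteModel": {
    "title": "Tab",
    "provider": "openai",
    "model": "llama3.1:latest",
    "apiBase": "http://xxxx:11434",
    "systemMessage": "just output the code, be concise"
  }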

Log output

No response

bitbottrap commented 2 weeks ago

You may be fighting an uphill battle against the model. Since you're using this for completion, I'm guessing you're running the 8B model for responsiveness. You might not be able to get the output you want consistently, even with more specific prompting.

sestinj commented 2 weeks ago

@laurentperez systemMessage does not apply to autocomplete, as the prompt has to be very specific and doesn't have room for modification.

Unfortunately, Llama 3 is intended as a chat model rather than an autocomplete model, so you shouldn't expect great results. I would recommend trying deepseek-coder:6.7b instead: https://ollama.com/library/deepseek-coder
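
If you go that route, something like this should work as a starting point (untested sketch; the "ollama" provider you already use for embeddings also works for tabAutocompleteModel, and apiBase can stay pointed at your remote box):

  "tabAutocompleteModel": {
    "title": "Tab",
    "provider": "ollama",
    "model": "deepseek-coder:6.7b",
    "apiBase": "http://xxxx:11434"
  }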

laurentperez commented 2 weeks ago

Yep, I understood after reading the template selector: https://github.com/continuedev/continue/blob/main/core/autocomplete/templates.ts#L314

"llama" will not match the includes, "deepseek" will and obv. as @sestinj pointed out, llama is intended as a chat model anyway. GH copilot autocompletion uses Codex but with a highly specific autocompletion prompt too.

I'll report back and close the issue once I've tested deepseek or another model intended for completion.