continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0
18.58k stars · 1.55k forks

qwen2.5-coder-7b autocomplete not working #2616

Status: Open · opened by xiaolucy1 6 days ago

xiaolucy1 commented 6 days ago

Relevant environment info

- OS: windows
- Continue version: 0.9.217
- IDE version: vscode 1.93.1
- Model: qwen2.5-coder-7b
- config.json:

      "tabAutocompleteModel": {
        "title": "qwen",
        "provider": "openai",
        "model": "qwen2.5-coder-7b",
        "apiKey": "xxxx",
        "apiBase": "http://xxxx/v1"
      },
      "tabAutocompleteOptions": {
        "debounceDelay": 100,
        "maxPromptTokens": 512,
        "multilineCompletions": "always",
        "disableInFiles": ["*.md"]
      },

qwen2.5-coder-7b is deployed with vLLM.

Description

Continue doesn't autocomplete the code. Checking the prompt log shows that the completion is not code; the model replies conversationally instead of completing the snippet.

To reproduce

No response

Log output

##### Completion options #####
{
  "contextLength": 8096,
  "model": "qwen2.5-coder-7b",
  "maxTokens": 2048,
  "temperature": 0.01,
  "stop": [
    "<|endoftext|>",
    "<|fim_prefix|>",
    "<|fim_middle|>",
    "<|fim_suffix|>",
    "<|fim_pad|>",
    "<|repo_name|>",
    "<|file_sep|>",
    "<|im_start|>",
    "<|im_end|>",
    "\n\n",
    "\r\n\r\n",
    "/src/",
    "#- coding: utf-8",
    "",
    "\ndef",
    "\nclass",
    "\n\"\"\"#"
  ],
  "raw": true
}

##### Prompt #####
<|fim_prefix|>
def main():
    print("hello world")
    print<|fim_suffix|>

    <|fim_middle|>==========================================================================
==========================================================================
Completion:

 It looks like your code is incomplete. Here's a more complete version of the `main` function that prints "hello world" and then prints a newline:
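The completion above is conversational prose rather than a fill-in-the-middle continuation, which usually means the endpoint is applying a chat template (or serving an instruct variant) instead of treating the raw FIM prompt as a plain completion. As a reference point, here is a minimal sketch of how a Qwen2.5-Coder FIM prompt like the one in the log is assembled, assuming the `<|fim_prefix|>` / `<|fim_suffix|>` / `<|fim_middle|>` special tokens shown above (the helper name is hypothetical):

```python
def build_qwen_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Qwen2.5-Coder fill-in-the-middle prompt.

    The model is expected to generate the code that belongs between
    `prefix` and `suffix`, stopping at one of the configured stop tokens.
    Hypothetical helper for illustration; token names match the log above.
    """
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"


prompt = build_qwen_fim_prompt(
    'def main():\n    print("hello world")\n    print',
    "\n",
)
```

A base model served as a raw completion endpoint should continue directly after `<|fim_middle|>` with code; an endpoint that wraps the request in a chat template will instead answer in prose, as seen in the log.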
d-r-e commented 2 hours ago

You can set "vllm" as the provider instead of "openai". It might not be the problem, though.
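A sketch of the suggested change, assuming the same endpoint as in the original report (the "xxxx" placeholders are kept as-is):

```json
"tabAutocompleteModel": {
  "title": "qwen",
  "provider": "vllm",
  "model": "qwen2.5-coder-7b",
  "apiKey": "xxxx",
  "apiBase": "http://xxxx/v1"
}
```

It is also worth verifying that the deployed checkpoint is the base qwen2.5-coder-7b rather than an instruct variant, since instruct models tend to answer conversationally, as in the log above.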