continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Tab autocomplete not working with openai provider and Qwen 2.5 Coder #2907

Open carmelomigliore opened 3 days ago

carmelomigliore commented 3 days ago

Relevant environment info

- OS: Linux openSUSE Tumbleweed
- Continue version: v0.9.228
- IDE version: VSCode 1.87
- Model: Qwen 2.5 Coder 3B
- Config:

```json
{
  "models": [
    {
      "model": "qwen-2.5-coder",
      "provider": "openai",
      "apiKey": "xxx",
      "apiBase": "https://myserver/chat/v1",
      "title": "Qwen 2.5 Coder 32b",
      "useLegacyCompletionsEndpoint": false
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen 2.5 Coder 3b",
    "provider": "openai",
    "model": "qwen",
    "apiKey": "xxx",
    "apiBase": "https://myserver/autocomplete/v1",
    "useLegacyCompletionsEndpoint": false
  },
...
```

Description

Hello, I'm unable to use tab autocomplete with the openai provider and Qwen 2.5 Coder 3B. It seems that Continue is unable to parse the model's response (see the logs).

I am also using Qwen-2.5-Coder-32B for chat and editing, and it works fine.

To reproduce

Write a file with the following content:

```c
#include <stdio.h>

int main (int argc, char** argv){
   if
}
```

While typing "if", the LLM seems to have correctly guessed what I was trying to do, but the response is probably not in the format Continue expects.

Log output

```
==========================================================================
==========================================================================
##### Completion options #####
{
  "contextLength": 8096,
  "maxTokens": 4096,
  "model": "qwen",
  "temperature": 0.01,
  "stop": [
    "<fim_prefix>",
    "<fim_suffix>",
    "<fim_middle>",
    "<file_sep>",
    "<|endoftext|>",
    "</fim_middle>",
    "</code>",
    "/src/",
    "#- coding: utf-8",
    ""
  ],
  "raw": true
}

##### Prompt #####
<fim_prefix>#include <stdio.h>

int main (int argc, char** argv){
if (/* condition */)
{
    /* code */
}
<fim_suffix>
}<fim_middle>==========================================================================
==========================================================================
Completion:
```
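For reference, the prompt in the log follows Qwen 2.5 Coder's fill-in-the-middle template. A minimal sketch of how such a raw FIM request body could be assembled for an OpenAI-compatible /v1/completions endpoint — the helper names are illustrative, and the options just mirror the completion options shown above:

```python
import json

# Qwen 2.5 Coder's FIM markers, as seen in the prompt above
FIM_TEMPLATE = "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap the code before/after the cursor in Qwen's FIM markers."""
    return FIM_TEMPLATE.format(prefix=prefix, suffix=suffix)

def fim_request_body(prefix: str, suffix: str, model: str = "qwen") -> str:
    """Build a raw /v1/completions request body (not /v1/chat/completions)."""
    return json.dumps({
        "model": model,
        "prompt": build_fim_prompt(prefix, suffix),
        "max_tokens": 4096,
        "temperature": 0.01,
        # Stop on FIM markers so the model returns only the middle span
        "stop": ["<fim_prefix>", "<fim_suffix>", "<fim_middle>", "<|endoftext|>"],
    })
```

This kind of prompt only makes sense against a raw completions (or dedicated FIM) endpoint; a chat endpoint would treat the marker tokens as ordinary message text.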
AnoyiX commented 2 days ago

same issue

00010110 commented 2 days ago

same issue +1

sestinj commented 2 days ago

@carmelomigliore the problem is "useLegacyCompletionsEndpoint": false, which forces Continue to attempt to use the /v1/chat/completions endpoint, which is not compatible with autocomplete. Does your server support /v1/completions or a dedicated FIM endpoint?
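If it does, the change this points at would be setting the flag to true on the autocomplete model only, so FIM prompts go to /v1/completions — a sketch using the placeholder values from the original report, not a confirmed fix:

```json
"tabAutocompleteModel": {
  "title": "Qwen 2.5 Coder 3b",
  "provider": "openai",
  "model": "qwen",
  "apiKey": "xxx",
  "apiBase": "https://myserver/autocomplete/v1",
  "useLegacyCompletionsEndpoint": true
}
```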

AnoyiX commented 1 day ago

@sestinj you can see this doc: https://docs.siliconflow.cn/guides/fim , thanks for your help

AnoyiX commented 1 day ago

@sestinj my config:

```json
{
  "models": [
    {
      "title": "Qwen",
      "provider": "openai",
      "model": "Qwen/Qwen2.5-72B-Instruct",
      "apiKey": "sk-****************************************",
      "apiBase": "https://api.siliconflow.cn/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen",
    "provider": "openai",
    "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "apiKey": "sk-****************************************",
    "apiBase": "https://api.siliconflow.cn/v1"
  }
}
```

Chat works, but autocomplete does not.

carmelomigliore commented 1 day ago

> @carmelomigliore the problem is "useLegacyCompletionsEndpoint": false, which forces Continue to attempt to use the /v1/chat/completions endpoint, which is not compatible with autocomplete. Does your server support /v1/completions or a dedicated FIM endpoint?

I also tried removing "useLegacyCompletionsEndpoint": false; Continue then switches to the legacy /v1/completions endpoint, but autocomplete still does not work.
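One thing that might help narrow this down: even on /v1/completions, an OpenAI-compatible client expects the completion text under choices[0].text, while chat responses carry it under choices[0].message.content. A quick sketch of that expectation (assuming a standard OpenAI-compatible response shape; this is not Continue's actual parsing code):

```python
import json

def extract_completion_text(response_body: str) -> str:
    """Pull the completion out of an OpenAI-compatible /v1/completions reply.

    Chat responses nest the text under choices[0].message.content instead,
    which is one reason pointing autocomplete at /chat/completions fails.
    """
    data = json.loads(response_body)
    return data["choices"][0]["text"]
```

If the server's reply nests the text somewhere else, that mismatch would be consistent with the empty Completion in the logs above.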