carmelomigliore opened this issue 3 days ago
same issue
same issue +1
@carmelomigliore the problem is `"useLegacyCompletionsEndpoint": false`, which forces Continue to attempt to use the /v1/chat/completions endpoint, which is not compatible with autocomplete. Does your server support /v1/completions or a dedicated FIM endpoint?
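For reference, if the server does support the legacy endpoint, a minimal sketch of forcing it in Continue's `config.json` looks like this (the model and apiBase match the config posted below; the key is a placeholder):

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen",
    "provider": "openai",
    "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "apiKey": "sk-...",
    "apiBase": "https://api.siliconflow.cn/v1",
    "useLegacyCompletionsEndpoint": true
  }
}
```

Setting the flag to `true` (rather than `false`) tells Continue to call /v1/completions directly instead of /v1/chat/completions.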
@sestinj you can see this doc: https://docs.siliconflow.cn/guides/fim, thanks for your help
@sestinj my config:
```json
{
  "models": [
    {
      "title": "Qwen",
      "provider": "openai",
      "model": "Qwen/Qwen2.5-72B-Instruct",
      "apiKey": "sk-****************************************",
      "apiBase": "https://api.siliconflow.cn/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen",
    "provider": "openai",
    "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "apiKey": "sk-****************************************",
    "apiBase": "https://api.siliconflow.cn/v1"
  }
}
```
The chat works, but autocomplete does not.
I also tried without `"useLegacyCompletionsEndpoint": false`; Continue then switches to the old /v1/completions endpoint, but autocomplete still does not work.
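To separate a server-side failure from a client-side parsing problem, it can help to call the completions endpoint directly and inspect the raw response; a minimal sketch in Python, assuming an OpenAI-compatible /v1/completions API at the apiBase above (the key and prompt are placeholders):

```python
import requests

# Call the legacy completions endpoint directly, bypassing Continue,
# to check whether the server returns a usable completion at all.
resp = requests.post(
    "https://api.siliconflow.cn/v1/completions",
    headers={"Authorization": "Bearer sk-..."},
    json={
        "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
        "prompt": "def fibonacci(n):",
        "max_tokens": 64,
    },
    timeout=30,
)
print(resp.status_code)
print(resp.json())
```

If this returns a normal completion, the endpoint itself works and the issue is more likely in how Continue builds or parses the autocomplete request.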
### Relevant environment info

### Description
Hello, I'm unable to use tab autocomplete with the openai provider and Qwen 2.5 Coder 3B. It seems to me that Continue is not able to parse the answer (see the logs).
I am also using Qwen-2.5-Coder-32B for chat and editing, and it works fine.
### To reproduce
Write a file with the following content:
While typing "if", it seems the LLM correctly guessed what I was trying to do, but probably not in the format expected by Continue.
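For context, Qwen2.5-Coder is trained with dedicated fill-in-the-middle tokens, so a raw completion request is expected to wrap the code before and after the cursor in that template; a minimal sketch, assuming the server passes the prompt through unmodified (the snippet being completed is an illustrative placeholder):

```python
import requests

# Qwen2.5-Coder FIM template: the model generates the text that
# belongs between fim_prefix and fim_suffix.
prefix = "def is_even(n):\n    if "
suffix = ":\n        return True\n    return False"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

resp = requests.post(
    "https://api.siliconflow.cn/v1/completions",
    headers={"Authorization": "Bearer sk-..."},
    json={
        "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
        "prompt": prompt,
        "max_tokens": 32,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["text"])
```

If the model answers sensibly when called this way but autocomplete still fails in the editor, the mismatch is likely between the template Continue sends and the one the model expects.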
### Log output