longle255 opened 3 weeks ago

Relevant environment info

Description

Autocomplete worked for me until recently (yesterday or today, I'm not sure exactly when), and then it stopped working. The chat feature is still working fine with any model; only autocomplete has the problem.

The autocomplete debug output shows that no prompt was generated, and the Ollama output shows that no prompt was provided.

I tried a couple of versions of Continue (release and pre-release), but the problem persists.

To reproduce

No response

Log output

output from ollama showing no prompt entered
Hi @longle255, any chance you are seeing the same error reported here? https://github.com/continuedev/continue/issues/2457#issuecomment-2400013843
@Patrick-Erichsen I can't tell for sure whether the causes of those errors were the same. I don't see a similar error message in the VSCode debug console. In fact, I don't see any error at all, in any output.
@Patrick-Erichsen I'm facing the same issue. The reason the completion was a single space character is that the stop list contains "```", and the Qwen2.5-Coder-7B-Instruct completion result starts with:

````
```python\n  # Uncomment the following line to set a custom chat template\n  # runtime.endpoint...
````

Modifying the stop list makes it work, but I haven't figured out how to configure Continue to do that.
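If Continue honors a per-model `completionOptions.stop` for the autocomplete model (an assumption I couldn't confirm), overriding the stop list in `config.json` might look like this rough sketch — the title, model name, and stop strings here are examples only:

```json
"tabAutocompleteModel": {
  "title": "Qwen2.5-Coder-7B-Instruct",
  "provider": "ollama",
  "model": "qwen2.5-coder:7b-instruct",
  "completionOptions": {
    "stop": ["<|endoftext|>", "\n\n"]
  }
}
```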
Same with IntelliJ IDEA.
```
##### Completion options #####
{
  "contextLength": 8192,
  "maxTokens": 8192,
  "temperature": 0,
  "topP": 1,
  "presencePenalty": 0,
  "frequencyPenalty": 0,
  "model": "deepseek-chat",
  "stop": [
    "<|fim▁begin|>",
    "<|fim▁hole|>",
    "<|fim▁end|>",
    "//",
    "<|end▁of▁sentence|>",
    "\n\n",
    "\r\n\r\n",
    "/src/",
    "#- coding: utf-8",
    "```",
    "\nclass",
    "\nfunction"
  ],
  "raw": true
}

##### Prompt #####
```
After some debugging in the Continue code, I think I found the cause of my issue: it's the `contextLength` of the `tabAutocompleteModel`.

It looks like the default `contextLength` for `tabAutocompleteModel` is 8096, while my option was `"completionOptions": { "maxTokens": 8192 }`. Since `maxTokens` is reserved out of the context window, a `maxTokens` equal to or larger than `contextLength` leaves no token budget for the prompt, so the prompt gets pruned completely.

So my fix was adding a `contextLength` option to `tabAutocompleteModel`:
"tabAutocompleteModel": {
"title": "qwen2.5-coder:7b-base",
"provider": "ollama",
"model": "qwen2.5-coder:7b-base",
"contextLength": 16384
},
"completionOptions": {
"maxTokens": 8192
}
This fixes the problem for me.

@facelezzzz I think it would also fix the problem for you, as your `contextLength` and `maxTokens` were the same.
@longle255 It works again! Thank you for your efforts.
Thanks for the writeup here @longle255! We should definitely provide a better warning in this case.

Curious why both of you are explicitly setting your `completionOptions` to 8192?
@Patrick-Erichsen As far as I remember, I had a problem with Continue where the generated answer was cut off in the middle quite a few times. After digging for a while, I found an issue here describing the same problem, https://github.com/continuedev/continue/issues/2319, and the option was mentioned in one of the comments. Adding the option fixed the problem for me.
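For anyone hitting the same thing: a minimal sketch of where that option goes in `config.json`, assuming the per-model `completionOptions` form works for chat models too — the model entry below is only an illustration, substitute your own provider and model:

```json
"models": [
  {
    "title": "qwen2.5-coder:7b-base",
    "provider": "ollama",
    "model": "qwen2.5-coder:7b-base",
    "completionOptions": {
      "maxTokens": 8192
    }
  }
]
```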