Open LNTH opened 2 hours ago
Ran into this same issue.
Found a workaround to get some completions.
```json
{
  "models": [
    {
      "title": "Qwen2.5-Coder-7b-Instruct",
      "provider": "vllm",
      "model": "Orion-zhen/Qwen2.5-Coder-7B-Instruct-AWQ",
      "apiBase": "http://10.0.0.10:8000/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder-7b-Instruct",
    "provider": "openai",
    "apiKey": "None",
    "completionOptions": {
      "stop": ["<|endoftext|>", "\n"]
    },
    "apiBase": "http://10.0.0.10:8000/v1/",
    "model": "Orion-zhen/Qwen2.5-Coder-7B-Instruct-AWQ"
  },
  "tabAutocompleteOptions": {
    "multilineCompletions": "never",
    "template": "You are a helpful assistant.<|fim_prefix|>{{{ prefix }}}<|fim_suffix|>{{{ suffix }}}<|fim_middle|>"
  },
  "customCommands": [],
  "allowAnonymousTelemetry": false,
  "docs": []
}
```
The key changes are the `tabAutocompleteOptions` template, switching the `tabAutocompleteModel` provider to `openai`, and adding the two `stop` entries under `completionOptions`.
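To make the template's role concrete, here is a rough sketch of how the `{{{ prefix }}}`/`{{{ suffix }}}` placeholders get filled in before the prompt is POSTed to `/v1/completions` (this is an illustration with placeholder prefix/suffix values, not Continue's actual rendering code):

```javascript
// Sketch: filling the FIM template from the workaround config.
// The prefix/suffix values below are hypothetical examples.
const template =
  "You are a helpful assistant.<|fim_prefix|>{{{ prefix }}}" +
  "<|fim_suffix|>{{{ suffix }}}<|fim_middle|>";

const prefix = "def add(a, b):\n    return "; // code before the cursor
const suffix = "\n";                          // code after the cursor

const prompt = template
  .replace("{{{ prefix }}}", prefix)
  .replace("{{{ suffix }}}", suffix);

// The model is expected to generate the "middle" after <|fim_middle|>,
// stopping at <|endoftext|> or a newline per the stop entries above.
console.log(prompt);
```

The `stop` entries matter because with `"multilineCompletions": "never"` the first newline should terminate the completion.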
Before submitting your bug report
Relevant environment info
Description
The tabAutoComplete feature is not displaying any suggestions in the VS Code editor.

To reproduce
The vllm server logs "GET /v1/models HTTP/1.1" 200 OK whenever the config.json is modified.

Expected Behavior
Auto-completion suggestions should appear in the VS Code editor.
Actual Behavior
The vllm server received "POST /v1/completions HTTP/1.1" 200 OK, but nothing shows up in the VS Code editor. The VS Code console displayed:

Error generating autocompletion: TypeError: Cannot read properties of undefined (reading 'includes')
Additional Observations
After this error occurs, the Continue extension no longer sends POST /v1/completions requests to the vllm server.
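For anyone debugging: that console message is the generic V8/Node error for calling `.includes` on an `undefined` value, e.g. a field that is missing from the completion response. A minimal reproduction of the error class (purely illustrative; the `response.text` field name is an assumption, not Continue's actual code):

```javascript
// Illustrative only: calling .includes on an undefined property throws
// the exact TypeError seen in the VS Code console.
const response = {}; // hypothetical completion response missing its "text" field

let caught;
try {
  // Same pattern as scanning a completion for a stop token:
  response.text.includes("<|endoftext|>");
} catch (err) {
  caught = err;
}

console.log(caught.message);
```

This suggests some value the extension expects in the response (or config) is `undefined` at the point it scans for stop strings.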
Log output