The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
Describe the bug
The codeqwen FIM template doesn't include file context in the prompt.
To Reproduce
Select a provider with the FIM template codeqwen: there is no File Context in the prompt (verified on the LLM server).
Select a provider with the FIM template starcoder2: File Context is present.
Expected behavior
The codeqwen FIM provider should add File Context to the prompt, as starcoder2 does.
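For illustration, a minimal sketch of the expected difference between the two templates. The function names, the `FimContext` shape, and the exact token layout are assumptions for this report, not twinny's actual implementation; the FIM tokens match what the server logs show for codeqwen and the documented StarCoder2 format.

```typescript
// Hypothetical sketch -- not twinny's real template code.
interface FimContext {
  fileContext: string; // context gathered from related/open files
  prefix: string;      // code before the cursor
  suffix: string;      // code after the cursor
}

// starcoder2-style template: file context is prepended before the FIM markers.
function buildStarcoder2Prompt(ctx: FimContext): string {
  return `${ctx.fileContext}<fim_prefix>${ctx.prefix}<fim_suffix>${ctx.suffix}<fim_middle>`;
}

// codeqwen template as observed in the bug: file context is dropped entirely.
function buildCodeqwenPrompt(ctx: FimContext): string {
  return `<|fim_prefix|>${ctx.prefix}<|fim_suffix|>${ctx.suffix}<|fim_middle|>`;
}

// Expected behavior: codeqwen should receive the file context too.
function buildExpectedCodeqwenPrompt(ctx: FimContext): string {
  return `${ctx.fileContext}<|fim_prefix|>${ctx.prefix}<|fim_suffix|>${ctx.suffix}<|fim_middle|>`;
}
```

With the prefix `-- test` from the logs, the buggy template yields exactly the prompt seen on the server (`<|fim_prefix|>-- test<|fim_suffix|><|fim_middle|>`), while the expected one would carry the file context in front.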
Logs from the LLM server
INFO PROMPT=
<|fim_prefix|>-- test<|fim_suffix|><|fim_middle|>
Logging
Original completion: s for the 'with' statement
Formatted completion: s for the 'with' statement
Max Lines: 30
Use file context: true
Completed lines count 1
Using custom FIM template fim.bhs?: false
workbench.desktop.main.js:sourcemap:617 [Extension Host] [twinny] ***Twinny Stream Debug***
Streaming response from llm_server.local:42069.
Request body:
{
"prompt": "<|fim_prefix|>-- test<|fim_suffix|><|fim_middle|>",
"stream": true,
"temperature": 0.2,
"max_tokens": 64
}
API Provider
oobabooga
Chat or Auto Complete?
FIM (auto-complete, autocomplete (for searchability))
Model Name
qwen2.5-coder-7B
Version
Extension version: v3.17.30