Closed: robertpiosik closed this issue 2 months ago.
Hello, I just released a new version which adds `<file_sep>` to the stop words for starcoder models. Many thanks.
https://github.com/twinnydotdev/twinny/commit/057c1a17a4474c22172d1f916dc657f0e0c6de66
@rjmacarthy What your change does is remove the `<file_sep>` token from the output. What also needs to be done is removing all the tokens that follow `<file_sep>`. It looks broken on starcoder's end, but it is what it is 🤷
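Something along these lines would do it (just a sketch, the function name is made up):

```typescript
// Truncate the completion at the first stop token instead of only
// deleting the token itself: drop <file_sep> and everything after it.
function truncateAtStopToken(completion: string, stopToken = "<file_sep>"): string {
  const index = completion.indexOf(stopToken);
  return index === -1 ? completion : completion.slice(0, index);
}
```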
So now, with this change, I can't use my hacky method of splitting on `<file_sep>`, because it's no longer there :P
Ok, I did a little research, and what is needed is the ability to set a stop sequence on the `options` key in the request body. Here is what works with the llm-vscode extension:
"llm.requestBody": {
"stream": true,
"options": {
"stop": [
"<file_sep>"
],
"temperature": 0,
}
},
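The same `options` key also works when talking to Ollama's `/api/generate` endpoint directly, e.g. (a sketch, assuming a default local Ollama server on port 11434):

```typescript
// Sketch: call Ollama's /api/generate directly with a stop sequence in
// "options", mirroring what llm-vscode puts in its request body.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({
      model: "starcoder2:3b",
      prompt,
      stream: false,
      options: { stop: ["<file_sep>"], temperature: 0 },
    }),
  });
  const data = await res.json();
  // Ollama halts generation before emitting the stop sequence, so the
  // response contains no <file_sep> and nothing after it.
  return data.response;
}
```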
Hello @robertpiosik, the change to remove the stop word is the correct approach, as we don't want it in the final output. In order to continue working with starcoder2, you need to create the correct Modelfile for Ollama and specify your stop words in the configuration. All the best. https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
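For example, a minimal Modelfile could look like this (the model name `starcoder2-fim` below is just an example):

```
FROM starcoder2:3b
PARAMETER stop "<file_sep>"
```

Then create the model with `ollama create starcoder2-fim -f Modelfile` and point twinny at `starcoder2-fim`.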
Ok, thank you!
Hey @robertpiosik, did that work for you? Many thanks.
I decided to use llm-vscode. Setting up a custom request body with a stop sequence is more convenient for me, and I don't need Twinny's sidebar features, as my VRAM capacity can only fit phi3-mini, which I find performs quite poorly in my use case. Anyway, fantastic work on the extension, cheers!
Ok no worries. Thanks for the help, all the best!
**Describe the bug** When using starcoder2:3b, my completions contain FIM tokens. My workaround is to split the output on `<file_sep>`.
**To Reproduce** Use starcoder2:3b with Ollama.