This is a llama.cpp-based provider that only supports infill with CodeLlama 7B and 13B, which require special token handling. Note that the base models seem to perform better than the Instruct models. I'm also not sure how to actually add instructions to a FIM prompt.
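For reference, the special token handling amounts to wrapping the text around the cursor in CodeLlama's fill-in-the-middle markers. A minimal sketch of that prompt construction, assuming the `<PRE>`/`<SUF>`/`<MID>` sentinel format from the CodeLlama infill paper (exact spacing and tokenization may differ from what llama.cpp's tokenizer expects, so treat this as illustrative only):

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Build a CodeLlama-style fill-in-the-middle prompt.

    The model is expected to generate the text that belongs between
    `prefix` and `suffix`, stopping at an end-of-text / <EOT> marker.
    """
    # Sentinel layout from the CodeLlama infill format; llama.cpp maps
    # these markers to dedicated special token ids during tokenization.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"


# Example: completing the body of a function around the cursor.
prompt = build_infill_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(1, 2))",
)
```

The completion the model returns after `<MID>` is what gets inserted at the cursor; this is also why Instruct-tuned variants fit awkwardly here, since there is no obvious slot in this format for a natural-language instruction.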
https://github.com/gsuuon/llm.nvim/assets/6422188/751310e5-4758-4adc-a69d-fb7c5c892490