huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0

Deepseek Coder not working #92

Open rhusiev opened 4 months ago

rhusiev commented 4 months ago

When I try to use DeepSeek Coder (via Ollama) with its tokenizer and its FIM tokens, the completions seem completely irrelevant (or perhaps cut off). However, when I send the prompt I would expect llm-ls to produce directly to Ollama, everything works fine:

[screenshot: irrelevant/cut-off completion suggested through llm-ls]

vs

[screenshot: correct output when sending the FIM prompt to Ollama directly]
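For reference, this is roughly how I reproduce the working case outside of llm-ls. It is a minimal sketch: the helper ollama_fim is hypothetical, it assumes Ollama is listening on localhost:11434 (as in my config below) and shells out to curl from Neovim, and it assembles the FIM prompt by hand in the begin/hole/end order.

-- Minimal sketch (hypothetical helper): send a hand-assembled DeepSeek-style
-- FIM prompt straight to Ollama's /api/generate. Assumes Ollama on
-- localhost:11434 and curl on $PATH.
local function ollama_fim(before_cursor, after_cursor)
    local prompt = "<|fim▁begin|>" .. before_cursor
        .. "<|fim▁hole|>" .. after_cursor
        .. "<|fim▁end|>"
    local body = vim.json.encode({
        model = "deepseek-coder:1.3b-base",
        prompt = prompt,
        stream = false,
        options = { num_predict = 4, temperature = 0.2, top_p = 0.95 },
    })
    local response = vim.fn.system({
        "curl", "-s", "http://localhost:11434/api/generate",
        "-d", body,
    })
    return vim.json.decode(response).response
end

-- Example: complete the body of a small function.
print(ollama_fim("def add(a, b):\n    return ", "\n"))

Prompting like this, the model returns a sensible completion; going through llm-ls with the equivalent settings does not.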

Here is my config for llm.nvim:

require("llm").setup({
    model = "deepseek-coder:1.3b-base",
    enable_suggestions_on_startup = true,
    accept_keymap = "<C-M-j>",
    dismiss_keymap = "<C-M-k>",
    tokens_to_clear = {
        "<|endoftext|>",
    },
    fim = {
        enabled = true,
                prefix = "<|fim▁begin|>",
                middle = "<|fim▁hole|>",
                suffix = "<|fim▁end|>"
    },
    backend = "ollama",
    debounce_ms = 0,
    url = "http://localhost:11434/api/generate",
    context_window = 240,
    -- cf https://github.com/ollama/ollama/blob/main/docs/api.md#parameters
    request_body = {
        -- Modelfile options for the model you use
        options = {
            num_predict = 4,
            temperature = 0.2,
            top_p = 0.95,
        },
    },
    lsp = {
        bin_path = vim.api.nvim_call_function("stdpath", { "data" }) .. "/mason/bin/llm-ls",
    },
    tokenizer = {
                repository = "deepseek-ai/deepseek-vl-1.3b-base", -- not working for some reason
    },
})
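If I read the fim settings right, llm-ls presumably concatenates the text around the cursor with these sentinels in the usual prefix/suffix/middle order used for StarCoder-style models. That is only my assumption, not something I have verified against the llm-ls source, but if it holds it would explain the symptom:

-- My assumption (unverified) about how llm-ls assembles the FIM prompt
-- from the config above, following the StarCoder-style ordering:
-- prefix .. text_before_cursor .. suffix .. text_after_cursor .. middle
local fim = {
    prefix = "<|fim▁begin|>",
    middle = "<|fim▁hole|>",
    suffix = "<|fim▁end|>",
}
local function build_prompt(before_cursor, after_cursor)
    return fim.prefix .. before_cursor .. fim.suffix .. after_cursor .. fim.middle
end
-- Under this ordering the model would receive
--   <|fim▁begin|> ... <|fim▁end|> ... <|fim▁hole|>
-- whereas DeepSeek Coder expects
--   <|fim▁begin|> ... <|fim▁hole|> ... <|fim▁end|>
-- which could account for irrelevant or cut-off completions.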

I believe this is a problem with how llm-ls handles it, but if I am wrong, I will open an issue on the llm.nvim GitHub instead.