SilasMarvin / lsp-ai

LSP-AI is an open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software engineers, not replace them.

Allow specifying a local model when using `llama_cpp` #27

Closed: SuperBo closed this issue 1 week ago

SuperBo commented 1 week ago

Currently, LSP-AI tries to download the model file when configured with llama_cpp. Is there any way for a user to point it to an already downloaded model in a local folder?

SuperBo commented 1 week ago

Never mind, I found the file_path option. I think this should be documented in the wiki.

```lua
{
  memory = {
    file_store = {},
  },
  models = {
    model1 = {
      type = "llama_cpp",
      repository = "Qwen/CodeQwen1.5-7B-Chat-GGUF",
      name = "codeqwen-1_5-7b-chat-q4_k_m.gguf",
      file_path = "/Users/user1/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
      n_ctx = 2048,
      n_gpu_layers = 1000,
    },
  },
}
```
SilasMarvin commented 1 week ago

Glad you were able to find it! It is documented under https://github.com/SilasMarvin/lsp-ai/wiki/Configuration#llamacpp, but that configuration page is getting so large that it may be worth redoing our docs as a small website at some point soon. I should also note that if you set file_path, you don't have to set repository and name.
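
For illustration, this is the earlier config trimmed down per that note: a minimal sketch with repository and name omitted, relying on file_path alone (all other keys are unchanged from SuperBo's example).

```lua
{
  memory = {
    file_store = {},
  },
  models = {
    model1 = {
      type = "llama_cpp",
      -- file_path points at an already downloaded GGUF file,
      -- so repository and name are not needed and nothing is downloaded.
      file_path = "/Users/user1/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
      n_ctx = 2048,
      n_gpu_layers = 1000,
    },
  },
}
```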