I want to use a custom model via Ollama, which I named copilot. I ran `ollama create copilot -f copilot`, and `ollama run copilot` works in the terminal; other extensions also work well with it. However, when I start Llama Coder and set the custom model to 'copilot', the extension tells me that copilot wasn't downloaded and asks if I want to download it. If I click 'No', nothing happens. Looking at the logs, I see "Ingoring since the user asked to ignore download."
How does llama-coder check if a model exists with ollama?
How can I use this model, as I already manually created it using the gguf file and ollama create?
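For context, I believe Ollama lists locally installed models over its HTTP API (`GET /api/tags` on the default port 11434), and each model name comes back with a tag suffix, e.g. `copilot:latest`. My guess is the extension compares its configured name against that list. A small sketch of that kind of check, with a hand-written sample response (the exact JSON fields are an assumption on my part and may differ from what the extension actually reads):

```python
import json

# Hypothetical sample of a response body from `GET http://localhost:11434/api/tags`;
# Ollama reports each local model with its tag appended (e.g. "copilot:latest").
sample_response = json.dumps({
    "models": [
        {"name": "copilot:latest"},
        {"name": "codellama:7b-code-q4_K_M"},
    ]
})

def model_is_installed(wanted: str, response_body: str) -> bool:
    """Return True if `wanted` matches an installed model, with or without a tag."""
    names = [m["name"] for m in json.loads(response_body)["models"]]
    # A bare name like "copilot" should also match "copilot:latest";
    # if the extension compares the untagged name with strict equality,
    # that could explain the "not downloaded" prompt.
    return any(n == wanted or n.split(":", 1)[0] == wanted for n in names)

print(model_is_installed("copilot", sample_response))  # True
```

If the comparison really is strict, setting the custom model to `copilot:latest` in the extension settings might be a workaround, but I haven't confirmed this.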
Thanks in advance!