Closed · region23 closed this 9 months ago

With the release of codellama, it has become possible to run LLMs on a local machine using ollama or llama.cpp. How do I configure your extension to work with a local codellama model?
hello @region23
yes, it is possible to use a local model. What you'd need to do is:
- change the extension's endpoint setting to point to your local endpoint
- and make sure to update the prompt template so it matches your model
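For example, a quick way to check that a local endpoint answers the kind of request the extension would send is a small script like the one below. The URL, the HF Inference-style request body, and the CodeLlama infill template are assumptions about a typical local setup, not the extension's exact behaviour:

```python
# Hypothetical check that a local server accepts an HF Inference-style request
# with a CodeLlama fill-in-the-middle prompt. Adjust the URL and template to
# whatever your local server and model actually expect.
import requests

ENDPOINT = "http://localhost:8080/generate"  # assumed local endpoint

prefix = "def fibonacci(n):\n    "
suffix = "\n    return result\n"
# Commonly used CodeLlama infill template; verify it against your model's docs.
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

response = requests.post(
    ENDPOINT,
    json={"inputs": prompt, "parameters": {"max_new_tokens": 64, "temperature": 0.2}},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected shape: [{"generated_text": "..."}]
```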
I'm getting "HF Code Error: code - 400; msg - Bad Request", even though a curl request to the same API works.
For now, ollama's API is not supported; it's on the todo list though!
I've also created an issue for llama.cpp: https://github.com/huggingface/llm-ls/issues/28
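As a rough illustration of why an adaptor is needed: ollama's /api/generate uses a different request and response shape than the HF Inference-style format assumed here for the extension, so in the meantime a small local proxy can translate between the two. A minimal, hypothetical sketch, with illustrative ports, model tag, and formats:

```python
# Hypothetical proxy: accept HF Inference-style requests and forward them to
# ollama's /api/generate. Ports, model tag, and the assumed HF-style format
# are illustrative only; this is not how llm-ls itself is implemented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint
MODEL = "codellama:7b-code"                         # assumed local model tag

class HFToOllamaAdapter(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the HF-style body: {"inputs": "...", "parameters": {...}}
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        prompt = body.get("inputs", "")

        # Forward the prompt using ollama's own request shape
        ollama_request = json.dumps(
            {"model": MODEL, "prompt": prompt, "stream": False}
        ).encode()
        req = Request(OLLAMA_URL, data=ollama_request,
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            generated = json.loads(resp.read())["response"]

        # Reply in the HF-style response shape: [{"generated_text": "..."}]
        payload = json.dumps([{"generated_text": generated}]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), HFToOllamaAdapter).serve_forever()
```

You could then point the extension's endpoint at http://localhost:8080 while ollama keeps running on its default port 11434.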
This issue is stale because it has been open for 30 days with no activity.
+1
This issue is stale because it has been open for 30 days with no activity.
Is there a timeline for when "feat: Add adaptors for ollama and openai" (#117) might be merged?
I'm finishing the last touches on llm-ls and testing that everything works as expected for 0.5.0, and then we should be good to go for a release. I'd say I'll either find some time this weekend or next week :)