huggingface / llm-vscode

LLM powered development for VSCode

Working with ollama or llama.cpp #60

Closed: region23 closed this issue 5 months ago

region23 commented 11 months ago

With the publication of codellama, it became possible to run an LLM on a local machine using ollama or llama.cpp. How can I configure your extension to work with a local codellama model?

mishig25 commented 11 months ago

hello @region23

Yes, it is possible to use a local model. What you'd need to do is:

  1. Serve the model locally at some endpoint
  2. And change the settings accordingly

Change the endpoint setting to your local endpoint:

[screenshot: the endpoint setting in the extension settings]

Also make sure to update the prompt template:

[screenshots: the prompt template settings]
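If you prefer editing `settings.json` directly, here is a minimal sketch of the two changes above for a locally served codellama model. The key names (`llm.modelIdOrEndpoint`, the `llm.fillInTheMiddle.*` group, `llm.tokensToClear`) are my best recollection of the extension's settings and may not match your version exactly, so treat them as illustrative and check the extension README:

```jsonc
{
  // Point the extension at the local endpoint instead of the HF Inference API.
  // Key name is illustrative; newer versions may split this into separate settings.
  "llm.modelIdOrEndpoint": "http://localhost:8080",

  // Prompt template: codellama infilling uses the <PRE> / <SUF> / <MID> tokens.
  "llm.fillInTheMiddle.enabled": true,
  "llm.fillInTheMiddle.prefix": "<PRE> ",
  "llm.fillInTheMiddle.suffix": " <SUF>",
  "llm.fillInTheMiddle.middle": " <MID>",

  // Token(s) to strip from the end of the generated completion.
  "llm.tokensToClear": ["<EOT>"]
}
```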
region23 commented 11 months ago

HF Code Error: code - 400; msg - Bad Request

[Screenshot 2023-08-31 at 18:07:59]
region23 commented 11 months ago

curl to the API is working:

[Screenshot 2023-08-31 at 18:29:09]
McPatate commented 10 months ago

For now, ollama's API is not supported; it's on the todo list though!

cf https://github.com/huggingface/llm-ls/issues/17
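To make the incompatibility concrete: as far as I understand, llm-ls currently speaks the Hugging Face Inference API format, while llama.cpp's server (and ollama) expect a different request shape, which is why a hand-written curl against the local server can succeed while the extension still gets a 400. The bodies below are illustrative sketches, not exact payloads:

```jsonc
// Body the extension sends (Hugging Face Inference API style, illustrative):
{
  "inputs": "<PRE> def hello(): <SUF> <MID>",
  "parameters": { "max_new_tokens": 60, "temperature": 0.2 }
}
```

```jsonc
// Body llama.cpp's /completion endpoint expects instead (illustrative):
{
  "prompt": "<PRE> def hello(): <SUF> <MID>",
  "n_predict": 60,
  "temperature": 0.2
}
```

The adapters tracked in the linked llm-ls issue are what would translate between these formats.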

McPatate commented 9 months ago

Also created an issue for llama.cpp: https://github.com/huggingface/llm-ls/issues/28

github-actions[bot] commented 8 months ago

This issue is stale because it has been open for 30 days with no activity.

flaviodelgrosso commented 7 months ago

+1

github-actions[bot] commented 6 months ago

This issue is stale because it has been open for 30 days with no activity.

jefffortune commented 5 months ago

Is there a timeline for when "feat: Add adaptors for ollama and openai" (#117) might be merged?

McPatate commented 5 months ago

I'm finishing the last touches on llm-ls and testing that everything works as expected for 0.5.0, and then we should be good to go for a release. I'd say either I find some time this weekend or next week :)