huggingface / llm-vscode

LLM powered development for VSCode
Apache License 2.0

Working with ollama or llama.cpp #60

Closed · region23 closed this issue 9 months ago

region23 commented 1 year ago

With the release of CodeLlama, it is now possible to run an LLM on a local machine using ollama or llama.cpp. How can I configure your extension to work with a local CodeLlama?

mishig25 commented 1 year ago

hello @region23

Yes, it is possible to use a local model. What you'd need to do is:

  1. Serve the model locally at some endpoint
  2. Change the extension settings accordingly

Change it to your local endpoint:

[screenshot: the endpoint setting]

and make sure to update the prompt template:

[screenshots: the prompt template settings]
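For reference, here is a minimal `settings.json` sketch for pointing the extension at a locally served CodeLlama. The exact setting keys vary between extension versions, so treat the key names below as assumptions and check the extension's README for the current ones:

```json
{
  // Assumed key name: point the extension at your local server instead of the HF Inference API
  "llm.modelIdOrEndpoint": "http://localhost:8080",

  // CodeLlama fill-in-the-middle prompt template (assumed key names)
  "llm.fillInTheMiddle.enabled": true,
  "llm.fillInTheMiddle.prefix": "<PRE> ",
  "llm.fillInTheMiddle.middle": " <MID>",
  "llm.fillInTheMiddle.suffix": " <SUF>",

  // Token CodeLlama emits when an infill is complete; it should be stripped from completions
  "llm.tokensToClear": ["<EOT>"]
}
```

(VS Code's `settings.json` accepts comments, so the snippet above can be pasted as-is.)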
region23 commented 1 year ago

`HF Code Error: code - 400; msg - Bad Request`

[screenshot 2023-08-31 at 18:07:59]
region23 commented 1 year ago

curl to the API is working

[screenshot 2023-08-31 at 18:29:09]
McPatate commented 1 year ago

For now ollama's API is not supported; it's on the todo list though!

cf https://github.com/huggingface/llm-ls/issues/17
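To illustrate why an adapter is needed: llm-ls currently posts a Hugging Face Inference API style body, while ollama expects its own `/api/generate` schema, so the two can't talk to each other directly. Roughly (field names are illustrative, not an exact spec), what llm-ls sends looks like:

```json
{
  "inputs": "<PRE> def fib(n): <SUF> <MID>",
  "parameters": { "max_new_tokens": 60, "temperature": 0.2 }
}
```

whereas ollama's `/api/generate` expects something like:

```json
{
  "model": "codellama:7b-code",
  "prompt": "<PRE> def fib(n): <SUF> <MID>",
  "stream": false,
  "options": { "temperature": 0.2 }
}
```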

McPatate commented 1 year ago

Also created an issue for llama.cpp: https://github.com/huggingface/llm-ls/issues/28
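Similarly, llama.cpp's built-in server exposes its own `/completion` endpoint with yet another body shape, roughly (illustrative; check the llama.cpp server README for the actual fields):

```json
{
  "prompt": "<PRE> def fib(n): <SUF> <MID>",
  "n_predict": 60,
  "temperature": 0.2
}
```

which again differs from the HF-style body above, hence the per-backend adapters being added to llm-ls.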

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 30 days with no activity.

flaviodelgrosso commented 11 months ago

+1

github-actions[bot] commented 10 months ago

This issue is stale because it has been open for 30 days with no activity.

jefffortune commented 10 months ago

Is there a timeline for when "feat: Add adaptors for ollama and openai" (#117) might be merged?

McPatate commented 9 months ago

I'm finishing the last touches on fixes in llm-ls and testing that everything works as expected for 0.5.0, then we should be good to go for a release. I'd say either I find some time this weekend or next week :)