huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0

Add Llamacpp support #81

Closed FredericoPerimLopes closed 4 months ago

FredericoPerimLopes commented 7 months ago

Add Llamacpp support

McPatate commented 5 months ago

Hi @FredericoPerimLopes, sorry for the late reply. I'm not convinced of the need for this PR, as llama.cpp supports an OpenAI-style API.

Is there value in supporting the "native" llamacpp API?

rggs commented 4 months ago

OK, here's why I think this is necessary.

  1. llama.cpp implements the OpenAI chat API, not the legacy completions API.
  2. llama.cpp can, however, accept a request in the form of the legacy completions API via its `/completion` endpoint (as opposed to `/v1/completions` for the OpenAI models).
  3. The legacy OAI completions API response is of the form `response.choices`, so you get the completion from the first item of the `choices` array.
  4. The llama.cpp completion response is of the form `response.content`.

So it seems like the easiest option is to either create a separate llama.cpp backend adapter, or add a config option (something like `llama.cpp_response = True`) so that the OpenAI adapter is used but the completion is read from `response.content` instead of `response.choices`.
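For concreteness, here's a minimal Rust sketch of that second option, assuming `serde`/`serde_json` and a hypothetical `extract_completion` helper. The field names match the two response shapes described above; this is not llm-ls's actual adapter code:

```rust
use serde::Deserialize;

// OpenAI legacy completions shape: text lives in `choices[0].text`.
#[derive(Deserialize)]
struct OpenAiChoice {
    text: String,
}

#[derive(Deserialize)]
struct OpenAiResponse {
    choices: Vec<OpenAiChoice>,
}

// llama.cpp `/completion` shape: text lives directly in `content`.
#[derive(Deserialize)]
struct LlamaCppResponse {
    content: String,
}

// Hypothetical helper: try the OpenAI shape first, then fall back to
// llama.cpp's. serde ignores unknown fields by default, so extra keys
// in either response are harmless.
fn extract_completion(body: &str) -> Option<String> {
    if let Ok(r) = serde_json::from_str::<OpenAiResponse>(body) {
        return r.choices.into_iter().next().map(|c| c.text);
    }
    serde_json::from_str::<LlamaCppResponse>(body)
        .ok()
        .map(|r| r.content)
}
```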

The end result of all this is that I'm able to make a request to the llama.cpp server, but none of the suggestions show up in VS Code.

julien-c commented 4 months ago

would be cool imo!

McPatate commented 4 months ago

Closing in favor of https://github.com/huggingface/llm-ls/pull/94

McPatate commented 4 months ago

@FredericoPerimLopes thank you for your contribution. I added you as a co-author on the commit that landed in `main`.