huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0

Proposal: Launching LLM server as a daemon #89

Open blmarket opened 7 months ago

blmarket commented 7 months ago

Proof-of-concept: https://github.com/blmarket/llm-ls

Hi,

I'm wondering whether llm-ls could incorporate a dedicated LLM server provider within the LSP server, preferably as a shared instance via daemonization. The idea was inspired by Bazel's client/server architecture.

It works as follows:

Many things are still hardcoded (such as the LLM binary path, various file paths, and the model path), but it's usable with the existing llm.XXX editor plugins.

I'd like to know whether this is something llm-ls would want to support as one of its backends, or whether it's better kept as a separate project.

blmarket commented 6 months ago

I've been using my prototype since then. Lessons learned: llm-ls makes too many requests, and the llama.cpp server did not support cancellation. I had to implement my own lock so the server processes only the latest request.

As this goes beyond the proof of concept, I'll keep my version separate. Maybe I'd better just fork, as these features may not align with Hugging Face's interests?

McPatate commented 4 months ago

Hello @blmarket, thank you for opening the issue. This is definitely interesting to me, as it simplifies installation (to some extent) and operation on your local machine (cf. my comment in https://github.com/huggingface/llm.nvim/issues/87).

I would be happy for you to contribute your work on this topic, if possible in the form of small incremental PRs.