huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0

feat: Add adaptors for various backends (ollama, tgi, api-inference) #40

Closed noahbald closed 9 months ago

noahbald commented 11 months ago

This should resolve https://github.com/huggingface/llm-ls/issues/17 by allowing users to remap "input" to "prompt" and to add extra fields such as `{ model: "model:nb-code" }` to the request body.

This will require further changes on clients as well (such as llm.nvim); happy to contribute there too.

closes #17, closes #28

noahbald commented 11 months ago

Hey @McPatate, my limited understanding is that a server usually responds with a text stream. If that's not the case, I can see how it would need to be addressed, maybe in a separate PR.

I feel that adding support through an identifier might be heftier to maintain; if the project ends up having logic to handle dozens of possible backends, I could see it becoming bloated. Perhaps supporting a plugin or middleware approach would make more sense, letting the community create what they need for their backend when they need it.

I'm still playing around with this, but I'm aiming to support usage of Ollama through llm.nvim

McPatate commented 11 months ago

Ollama's response is not supported by llm-ls currently, it needs to be handled otherwise it won't work.
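(For context on what "handled" means here: Ollama's generate endpoint streams newline-delimited JSON objects, so an adaptor has to deserialize each chunk and pull out the generated text before llm-ls can use it. The sketch below is illustrative only, based on my reading of Ollama's docs, not actual llm-ls code; the field names are assumptions worth double-checking.)

```rust
use serde::Deserialize;

// Rough sketch, not llm-ls code: each streamed line from Ollama looks roughly
// like {"model":"...","response":"...","done":false}. Field names per Ollama's
// docs as I understand them; double-check before relying on this.
#[derive(Deserialize)]
struct OllamaGenerateChunk {
    response: String,
    done: bool,
}

// Hypothetical helper: parse one streamed line and return the text fragment
// that would be appended to the completion shown in the editor.
fn extract_text(line: &str) -> Option<String> {
    serde_json::from_str::<OllamaGenerateChunk>(line)
        .ok()
        .map(|chunk| chunk.response)
}
```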

I wonder if we'll ever get to dozens of different backends. I believe at some point we'll stabilise on an API and people will implement "the standard" rather than re-invent a new thing. But yes, at first we'll need to implement a few of them before that crystallises. I'm not sure the plugin route would solve anything, and it might be too much work compared to just adding different backends.

noahbald commented 10 months ago

@McPatate I haven't done a proper test yet, but I'd like to get your thoughts on the implementation of the adaptors and whether you think they can be improved.

Users will now be able to specify an adaptor in their configuration, which will handle the transformation of requests and responses to be compatible with the given backend.
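To make the shape of the change concrete, here is a minimal sketch of what such an adaptor seam could look like; the trait, method names, and the Ollama example are illustrative assumptions, not the actual llm-ls implementation:

```rust
use serde_json::{json, Value};

// Illustrative adaptor seam: one hook builds the backend-specific request body,
// the other pulls the generated text back out of the backend's response.
trait Adaptor {
    fn build_request_body(&self, prompt: &str, extra: &Value) -> Value;
    fn parse_generations(&self, response: &Value) -> Vec<String>;
}

// Hypothetical Ollama-style adaptor: rename the prompt field and merge
// user-supplied fields (e.g. { "model": "model:nb-code" }) from the client config.
struct OllamaAdaptor;

impl Adaptor for OllamaAdaptor {
    fn build_request_body(&self, prompt: &str, extra: &Value) -> Value {
        let mut body = json!({ "prompt": prompt });
        if let (Some(map), Some(extra)) = (body.as_object_mut(), extra.as_object()) {
            map.extend(extra.clone());
        }
        body
    }

    fn parse_generations(&self, response: &Value) -> Vec<String> {
        response["response"]
            .as_str()
            .map(|text| vec![text.to_owned()])
            .unwrap_or_default()
    }
}
```

The point of a seam like this is that supporting another backend then means adding another impl, rather than threading backend-specific branches through the request path.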

noahbald commented 10 months ago

@McPatate do you think it's worth me re-implementing https://github.com/huggingface/llm-ls/pull/42 as an adaptor?

McPatate commented 10 months ago

> @McPatate do you think it's worth me re-implementing https://github.com/huggingface/llm-ls/pull/42 as an adaptor?

Yes good idea

noahbald commented 10 months ago

Thanks for the feedback @McPatate, it's been really helpful; the PR is already looking a lot better with what you've suggested. Keen to know if there's anything else you think we need to get this through.

McPatate commented 10 months ago

> Thanks for the feedback @McPatate, it's been really helpful

Thank you for your contribution!

> Keen to know if there's anything else you think we need to get this through

How have you tested your PR? If the CI passes and your testing is conclusive, then we should be able to merge.

We will need to update all clients to match the new API; is this something you'd be willing to do as well?

noahbald commented 10 months ago

> How have you tested your PR? If the CI passes and your testing is conclusive, then we should be able to merge.
>
> We will need to update all clients to match the new API; is this something you'd be willing to do as well?

I've been testing by hacking around in llm.nvim. I'll need to retest, though, as I haven't tested this in a while. I don't think I have time to update all the clients, but I can make a PR for what I've done in llm.nvim when it's ready.

noahbald commented 10 months ago

I'm trying to test this, but I'm running into an issue where it complains about the case of the request parameters (error screenshot attached).

I'm using https://github.com/noahbald/llm.nvim as my client here.

Is this something to do with `#[serde(rename_all = "camelCase")]`?

McPatate commented 10 months ago

> Is this something to do with `#[serde(rename_all = "camelCase")]`?

Yes, I've recently changed the API to make the case consistent across parameters. I haven't released a version of llm-ls yet since it's a breaking change. I was waiting for a few other things before creating a new release, like your PR and what I'm currently working on.
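For anyone hitting the same mismatch: with serde's `rename_all = "camelCase"` the wire format uses camelCase keys while the Rust fields stay snake_case, so clients still sending snake_case keys get rejected. A minimal sketch of the behaviour; the struct below is illustrative, not llm-ls's real params type:

```rust
use serde::Deserialize;

// Illustrative only, not llm-ls's actual request params.
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
struct CompletionParams {
    tokens_to_clear: Vec<String>,
    context_window: usize,
}

fn main() {
    // Keys must be camelCase on the wire; snake_case keys would not match
    // the renamed fields, leaving required fields missing.
    let json = r#"{ "tokensToClear": ["<EOT>"], "contextWindow": 4096 }"#;
    let params: CompletionParams = serde_json::from_str(json).unwrap();
    println!("{params:?}");
}
```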

noahbald commented 10 months ago

I've got it working using my fork of llm.nvim (screenshot attached); I'll make a PR for that!

You may want to test with openai and huggingface as well; I don't have accounts with either, so I haven't tested those backends.

manish-baghel commented 10 months ago

@noahbald @McPatate Thanks for the amazing work.

I managed to test it locally. Models:

  1. Open Hermes Mistral 2.5-7B merged with Intel neural chat v3.3
  2. Open Hermes Mistral 2.5-7B by Teknium


However, I had to copy-paste all the snake_case config params to camelCase:

  local params = lsp.util.make_position_params()
  params.model = utils.get_model()
  -- original snake_case params, each duplicated with a camelCase key for the new llm-ls API
  params.tokens_to_clear = config.get().tokens_to_clear
  params.tokensToClear = config.get().tokens_to_clear
  params.api_token = config.get().api_token
  params.apiToken = config.get().api_token
  params.request_params = config.get().query_params
  params.request_params.do_sample = config.get().query_params.temperature > 0
  params.requestParams = config.get().query_params
  params.requestParams.doSample = config.get().query_params.temperature > 0
  params.fim = config.get().fim
  params.tokenizer_config = config.get().tokenizer
  params.tokenizerConfig = config.get().tokenizer
  params.context_window = config.get().context_window
  params.contextWindow = config.get().context_window
  params.tls_skip_verify_insecure = config.get().tls_skip_verify_insecure
  params.tlsSkipVerifyInsecure = config.get().tls_skip_verify_insecure
  params.adaptor = config.get().adaptor
  params.request_body = config.get().request_body
  params.requestBody = config.get().request_body
  params.ide = "neovim"

Hopefully the changes will get merged into the main branch soon. Really excited for this one 🤠

noahbald commented 10 months ago

> @noahbald @McPatate Thanks for the amazing work. I managed to test it locally. [...] However, I had to copy-paste all the snake_case config params to camelCase.

Thanks for testing! The casing changes are expected at the moment; McPatate is working on that separately:

> Yes, I've recently changed the API to make the case consistent across parameters. I haven't released a version of llm-ls yet since it's a breaking change. I was waiting for a few other things before creating a new release, like your PR and what I'm currently working on.

McPatate commented 9 months ago

Since I can't run the CI without risking leaking the tokens to the world, I ran it locally, and things seem to be broken:

cargo +nightly run --bin testbed -r -- --api-token $API_TOKEN -r `pwd`/crates/testbed/repositories-ci.yaml -f simple

Produces:

| Repository name | Source type | Average hole completion time (s) | Pass percentage |
| --- | --- | --- | --- |
| simple | local | 0.14864999 | 0% |
| Total | -- | 0.14864999 | 0% |

When it should be 100%

noahbald commented 9 months ago

@McPatate I'm not sure how the tests are set up here. I tried running your command but I'm getting an IO error. I set up a Hugging Face account to play around with it in the editor and it seems to work :\

Do you have any advice on how I can approach the failing tests, or would you rather look into it on your end?

McPatate commented 9 months ago

I'd suggest using a debugger or just adding logs in the testbed code and trying to see where the IO error is coming from. You have to be at the root of the repository to run the command, and your API_TOKEN should be your HF token.

If you're really struggling, I'll take a look. I haven't completely documented testbed, and there are some subtleties that may not be very user-friendly at times; sorry about that.

noahbald commented 9 months ago

I've done a sync with main, and unfortunately I can't seem to run testbed at all in my environment (WSL); error screenshot attached.

McPatate commented 9 months ago

Try setting LOG_LEVEL=debug when running testbed to see if you get more information. You can set the log level per crate as well; see how the RUST_LOG format works, as the LOG_LEVEL var uses the same syntax.
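For example, a per-crate filter looks like the following (the crate/target names here are assumptions; the syntax is the same as RUST_LOG's EnvFilter):

LOG_LEVEL=llm_ls=debug,info cargo +nightly run --bin testbed -r -- --api-token $API_TOKEN -r crates/testbed/repositories-ci.yaml -f simple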

Also, you'll need to run testbed from the root of the directory, so:

- -r ../../crates/testbed/repositories-ci.yaml
+ -r crates/testbed/repositories-ci.yaml

With `pwd` returning /path/to/llm-ls.

noahbald commented 9 months ago

Thanks Luc, I hadn't run a release build, so llm-ls was missing from target/release. I've run the test and it's passing for me now. Maybe I'm missing something, or maybe it was fixed by a recent commit.

2023-12-31T03:53:29.708931Z  INFO testbed: 726: simple from local obtained 100.00% in 3.544s
2023-12-31T03:53:29.709274Z  INFO testbed: 747: all tests were run, exiting

McPatate commented 9 months ago

Thank you very much for the hard work on this PR @noahbald, especially while putting up with my incessant pestering 😉