huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0
553 stars 43 forks

Only warn of rate-limits when using HF endpoint #58

Closed: HennerM closed this 5 months ago

HennerM commented 6 months ago

I am trying the llm-vscode extension with llm-ls against a locally hosted endpoint (running a custom fine-tuned model); however, the extension still warns that I might get rate-limited by Hugging Face.

Since inference doesn't run on a Hugging Face server, this warning is unnecessary.

HennerM commented 5 months ago

> Since we now have the adaptor setting, I'd rather check its value than doing so w/ the URL. Wdyt?

Yes, good idea. I changed this to just check the adaptor value now.
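
For context, here is a minimal sketch of the idea being discussed: gate the rate-limit warning on the configured adaptor rather than on the request URL. The `Adaptor` enum, `CompletionParams` struct, and `maybe_warn_rate_limit` helper below are illustrative assumptions, not the actual llm-ls code.

```rust
// Hypothetical sketch: only warn about rate limits when the hosted
// Hugging Face backend is selected, regardless of the request URL.
// All names here are assumed for illustration.

#[derive(Debug, PartialEq)]
enum Adaptor {
    HuggingFace, // requests go to the hosted Hugging Face Inference API
    Tgi,         // self-hosted text-generation-inference endpoint
    OpenAi,
    Ollama,
}

struct CompletionParams {
    adaptor: Adaptor,
    url: String,
}

/// Emit the rate-limit warning only for the hosted Hugging Face adaptor.
fn maybe_warn_rate_limit(params: &CompletionParams) {
    if params.adaptor == Adaptor::HuggingFace {
        eprintln!(
            "warning: requests to {} may be rate limited by Hugging Face",
            params.url
        );
    }
}

fn main() {
    // A locally hosted endpoint no longer triggers the warning,
    // even though its URL says nothing about where inference runs.
    let local = CompletionParams {
        adaptor: Adaptor::Tgi,
        url: "http://localhost:8080/generate".to_string(),
    };
    maybe_warn_rate_limit(&local); // prints nothing
}
```

Checking the adaptor value avoids the brittleness of URL matching: a custom proxy in front of the Hugging Face API or a local server with an unusual hostname are both classified correctly by the setting the user already chose.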