huggingface / llm-ls

LSP server leveraging LLMs for code completion (and more?)
Apache License 2.0
553 stars 43 forks

[Suggestion] Metrics support #56

Open DanielAdari opened 6 months ago

DanielAdari commented 6 months ago

First of all, amazing project!

We've started experimenting with the project on an on-premise offline environment, so far it works great!

We need our extensions to send metrics and events to a centralized backend in order to have usage statistics for our company users.

Do you think it fits in llm-ls, or should it go in the extension itself?

Are you guys planning on adding support for other (optional & opt-in) telemetry events?

Thanks! 🙃

McPatate commented 6 months ago

Hi @DanielAdari! I've been thinking about this for a while and I'd like to add telemetry to llm-ls.

I'm unsure how to materialise this though; I don't want coupling with a specific backend.

I'm also wondering whether to enable this as a compile-time feature, producing binaries with & without telemetry code, or whether it should be a runtime flag.
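To make the trade-off concrete, here's a rough sketch of both options (the `telemetry` feature name and the functions are hypothetical, nothing of the sort exists in llm-ls today):

```rust
// Compile-time gate: telemetry code only exists in binaries built with
// `cargo build --features telemetry` (hypothetical feature name, which
// would be declared under [features] in Cargo.toml).
#[cfg(feature = "telemetry")]
fn emit_event(name: &str) {
    // ship the event to the configured backend
    let _ = name;
}

#[cfg(not(feature = "telemetry"))]
fn emit_event(_name: &str) {
    // compiled out: telemetry-free builds get a no-op
}

// Runtime alternative: one binary for everyone, gated by configuration.
fn emit_event_if_enabled(enabled: bool, name: &str) {
    if enabled {
        emit_event(name);
    }
}
```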

As for a timeline, I've been pretty busy with other topics and haven't had the time to give some love to our extensions, so I can't promise anything 😅

DanielAdari commented 6 months ago

That's good to hear!

I'm sure plenty of organizations would benefit from this feature.

It seems to me that a runtime flag would be simpler to implement and would result in fewer releases to manage.

Implementing a generic way of pushing metrics and completion events would definitely be a hard one, though; maybe something like a pluggable sink, as sketched below.
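Just to illustrate what "generic" could mean, one option is a backend-agnostic trait that the server drives and configuration plugs a concrete sink into (all names here are hypothetical; llm-ls has no such trait today):

```rust
use serde_json::Value;

/// Hypothetical backend-agnostic telemetry sink: llm-ls would only
/// depend on this trait, never on a specific metrics backend.
pub trait TelemetrySink: Send + Sync {
    fn record_event(&self, name: &str, payload: &Value);
}

/// No-op sink for users who keep telemetry disabled.
pub struct NoopSink;

impl TelemetrySink for NoopSink {
    fn record_event(&self, _name: &str, _payload: &Value) {}
}

// The server would hold a `Box<dyn TelemetrySink>` chosen at startup
// from user configuration, so no particular backend is baked in.
```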

You should definitely fix some of the more urgent bugs in the LS and extensions before starting with this 😄

McPatate commented 5 months ago

Also note that you can already parse the log file from llm-ls, located at ~/.cache/llm_ls/llm-ls.log.

It records logs as JSON, with context data surrounding each log line. You can probably already compute metrics from there.
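For instance, a quick pass over the log might look like this (a minimal sketch, assuming one JSON object per line; the `fields.message` path is an assumption based on tracing's JSON layout, so check the actual file). It needs `serde_json` as a dependency:

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    // Path as mentioned above; adjust if your cache dir differs.
    let home = std::env::var("HOME").expect("HOME not set");
    let file = File::open(format!("{home}/.cache/llm_ls/llm-ls.log"))?;

    // Count occurrences of each log message as a crude usage metric.
    let mut counts: HashMap<String, usize> = HashMap::new();
    for line in BufReader::new(file).lines() {
        let line = line?;
        // Each line should be a JSON object; skip anything that isn't.
        let Ok(entry) = serde_json::from_str::<serde_json::Value>(&line) else {
            continue;
        };
        // "message" sits under "fields" in tracing-style JSON output;
        // this is an assumption, inspect your own log for the schema.
        if let Some(msg) = entry.pointer("/fields/message").and_then(|m| m.as_str()) {
            *counts.entry(msg.to_string()).or_default() += 1;
        }
    }
    for (msg, n) in &counts {
        println!("{n:>6}  {msg}");
    }
    Ok(())
}
```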