huggingface / text-generation-inference

Large Language Model Text Generation Inference
http://hf.co/docs/text-generation-inference
Apache License 2.0

Add Environment Variable for OTLP Service Name #2069


KevinDuffy94 commented 3 weeks ago

Feature request

Currently the service name for OTLP is hard-coded as "text-generation-inference.server". Could an environment variable be added to set this? Something like:

```python
resource = Resource.create(
    attributes={"service.name": os.getenv("OTLP_SERVICE_NAME", "text-generation-inference.server")}
)
```

And similar in the main.rs file in the router.
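The proposed fallback behavior can be sketched as a small helper (the function name `otlp_service_name` is hypothetical, used here only to illustrate the env-var-with-default pattern):

```python
import os

# Hypothetical helper mirroring the proposed change: read the OTLP service
# name from the environment, falling back to the current hard-coded default.
def otlp_service_name(default="text-generation-inference.server"):
    return os.getenv("OTLP_SERVICE_NAME", default)

# With the variable unset, the hard-coded default is preserved,
# so existing deployments are unaffected.
os.environ.pop("OTLP_SERVICE_NAME", None)
print(otlp_service_name())  # text-generation-inference.server

# With the variable set, each deployment can report a distinct service name.
os.environ["OTLP_SERVICE_NAME"] = "my-llm-router"
print(otlp_service_name())  # my-llm-router
```

Because the default matches the current hard-coded value, this change would be backward compatible for anyone not setting the variable.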

Motivation

Currently I have multiple models deployed whose traces are all collected into a single APM, so the shared service name is problematic: their spans cannot be told apart.

Your contribution

I'm not confident writing any Rust code.

LysandreJik commented 3 weeks ago

WDYT @OlivierDehaene? Are you open to this addition?

LysandreJik commented 3 weeks ago

Seen with Olivier, sounds good! Would you like to open a PR similarly to https://github.com/huggingface/text-embeddings-inference/pull/285?