triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Allow to pass custom logging format via options #5333

Open riZZZhik opened 1 year ago

riZZZhik commented 1 year ago

Hello!

The problem is: I need to use a custom log format, but I am unable to do that right now without a custom backend.

Can you add the possibility to pass a custom log format via options?

Thank you in advance!

riZZZhik commented 1 year ago

Maybe extend the --log-format option?

Right now its options are "default" and "ISO8601".
Code: main.cc#L359 and grpc_server.cc#L1544

rmccorm4 commented 1 year ago

Hi @riZZZhik,

Thanks for raising this request. A similar issue, https://github.com/triton-inference-server/server/issues/3765, asks specifically for JSON or XML support.

Are you looking for the same, or are you looking to provide an arbitrary log format string? If the latter, can you share some examples of what you're looking for?

riZZZhik commented 1 year ago

Hello @rmccorm4,

Issue #3765 is enough to use log collectors, e.g. Elasticsearch, but I'm looking to provide an arbitrary format.

I am not sure how logging works in C++, but I can suggest a format similar to Python's logging.Formatter (docs) and add variables like request_id and sentence_id.

Example value for the option: `%(asctime)s.%(msecs)03d [%(levelname)8s] --- [%(name)s] --- [%(request_id)36s] --- %(message)s`
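For illustration, the format string above follows the syntax of Python's stdlib `logging.Formatter`. A minimal runnable sketch, assuming `request_id` is supplied per call via the `extra` mapping (it is not a built-in log-record attribute, and all names here are illustrative):

```python
import logging

# The suggested format string; request_id is a custom attribute that each
# log call must supply via `extra`, otherwise formatting raises an error.
FORMAT = ("%(asctime)s.%(msecs)03d [%(levelname)8s] --- [%(name)s] "
          "--- [%(request_id)36s] --- %(message)s")

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(FORMAT, datefmt="%Y-%m-%d %H:%M:%S"))

logger = logging.getLogger("triton.example")  # illustrative logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("inference complete", extra={"request_id": "req-42"})
```

A C++ server would need its own formatter with a pre-defined field set; this only demonstrates the semantics of the proposed option value.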

riZZZhik commented 1 year ago

Hi @rmccorm4,

Any updates on this issue?

rmccorm4 commented 1 year ago

Hi @riZZZhik, there are no plans at this time to allow an arbitrary format.

We'd likely need to pre-define a set of acceptable fields before allowing users to specify the format, similar to what the Python log formatter accepts. If you could enumerate the most useful fields you would like to see, and explain why the existing logging formats are not suitable for your needs, that would help when we consider this in the future.

Lastly, what is the definition of sentence_id here? Did you mean sequence_id? Or is it some language-model-specific detail?

riZZZhik commented 1 year ago

Hi @rmccorm4,

It's sad to hear that there are no plans to allow an arbitrary format at this time 😔

I suggest the following, but I'm sure there are many more:

| Key | Description |
| --- | --- |
| `message` | The logged message |
| `traceback` | Detailed message of the exception |
| `level` | Text logging level for the message (`INFO`, `WARNING`, `ERROR`, etc.) |
| `backend` | Which backend logged it |
| `module` / `file` | Module (name portion of the filename) |
| `time` | Timestamp in human-readable form |
| `timestamp` | Unix timestamp in nanoseconds |
| `model_instance_name` | Model instance name |
| `model_version` | Model version |
| `request_id` | Request ID passed to `InferenceRequest` |
| `batch_id` | A unique batch ID passed to the model instance (if possible) |

Apologies, I don't remember the definition of "sentence_id," but I'm certain it wasn't a detail specific to the model. It was probably a typo of "sequence_id," as you mentioned.
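As an aside for readers, the proposed fields can be sketched as a structured (JSON) record. This is purely hypothetical: the function name, field values, and layout are placeholders, not anything Triton emits today.

```python
import json
import time

def make_log_record(message, level, backend, model_instance_name,
                    model_version, request_id):
    """Build a hypothetical structured log record with the proposed keys."""
    now = time.time()
    return {
        "message": message,
        "level": level,                      # e.g. INFO, WARNING, ERROR
        "backend": backend,                  # which backend logged it
        "time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now)),
        "timestamp": int(now * 1e9),         # Unix timestamp in nanoseconds
        "model_instance_name": model_instance_name,
        "model_version": model_version,
        "request_id": request_id,
    }

print(json.dumps(make_log_record(
    "inference complete", "INFO", "python", "resnet50_0", "1", "req-42")))
```

A record like this would satisfy log collectors such as Elasticsearch without any per-field format string at all, which is essentially what issue #3765 asks for.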

Csehpi commented 3 months ago

Hello @nnshah1,

Any updates on this feature request? Are you planning to introduce it?

Thanks

nnshah1 commented 3 months ago

@Csehpi - Unfortunately, there are no current plans to provide more generic logging format customization. We recently updated the server to ensure the message portion is escaped via JSON string-encoding rules, but did not have an opportunity to make the message APIs more customizable.
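To illustrate the escaping mentioned above, here is a minimal Python sketch of JSON string encoding applied to a log message. This shows the general escaping rules only, not Triton's actual C++ implementation:

```python
import json

# A message containing characters that would break naive line-based log
# parsing: double quotes, a newline, and a backslash.
raw_message = 'model "resnet50" failed:\npath C:\\models'

# JSON string encoding escapes quotes, newlines, and backslashes so the
# message occupies a single, unambiguous line in the log output.
escaped = json.dumps(raw_message)
print(escaped)
```

With this encoding the message round-trips: `json.loads(escaped)` recovers the original string exactly.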

We would be happy to consider a contribution in this area though - we agree it would be valuable.