triton-inference-server / onnxruntime_backend

The Triton backend for the ONNX Runtime.
BSD 3-Clause "New" or "Revised" License

how to use onnxruntime profiling in triton #207

Open cyh-ustc opened 1 year ago

cyh-ustc commented 1 year ago

Is your feature request related to a problem? Please describe.

The Triton trace API only reports the total inference time. How can I get more detailed timing, such as a per-operator or per-kernel breakdown?

Describe the solution you'd like

Perhaps by allowing the ONNX Runtime profiling option to be enabled through the model configuration, as sketched below.
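For illustration only, such an option could be surfaced in `config.pbtxt` the same way the backend's other ONNX Runtime options are. The `enable_profiling` parameter key below is hypothetical; the backend does not support it today, and this fragment just sketches what the request asks for:

```
# Hypothetical config.pbtxt fragment -- the "enable_profiling" key does not
# exist in the onnxruntime backend; it only illustrates the requested feature.
name: "my_onnx_model"
backend: "onnxruntime"
parameters {
  key: "enable_profiling"
  value: { string_value: "true" }
}
```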

Additional context

https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html
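For reference, this is a minimal sketch of what the linked profiling feature does in standalone ONNX Runtime via its Python API. The model path, input shape fallback, and `float32` dtype are placeholders; the request is for Triton to expose the equivalent of `SessionOptions.enable_profiling`:

```python
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True          # emit a chrome://tracing JSON profile
so.profile_file_prefix = "ort_profile"

# "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", sess_options=so)

# Run one inference with dummy data so the profiler records per-op timings.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
data = np.zeros(shape, dtype=np.float32)                     # dtype assumed
session.run(None, {inp.name: data})

# Finalize profiling and print the path of the JSON trace,
# which can be viewed in chrome://tracing or Perfetto.
print(session.end_profiling())
```

The resulting JSON trace contains the per-operator and per-kernel timings the issue asks about; today this requires running the model outside Triton.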