Is your feature request related to a problem? Please describe.
The Triton trace API timing only reports the total inference time. How can I get a more detailed breakdown, such as operator-level or kernel-level timing?
Describe the solution you'd like
Perhaps by allowing the ONNX Runtime profiling option to be enabled through the model configuration.
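For illustration, such an option could be exposed as a backend parameter in the model's `config.pbtxt`. This is only a hypothetical sketch: the `enable_profiling` and `profile_file_prefix` parameter names are assumptions for this request, not existing options of the ONNX Runtime backend.

```proto
# Hypothetical config.pbtxt sketch -- the parameter names below are
# assumptions illustrating the requested feature, not existing
# onnxruntime backend options.
name: "my_onnx_model"
backend: "onnxruntime"
parameters {
  key: "enable_profiling"          # would turn on ONNX Runtime session profiling
  value { string_value: "true" }
}
parameters {
  key: "profile_file_prefix"       # where the per-operator JSON trace would be written
  value { string_value: "ort_profile" }
}
```

ONNX Runtime's session-level profiling already emits a JSON trace with per-operator and per-kernel timings, so surfacing that switch in the model configuration would cover this use case.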
Additional context
https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html