Tempo currently has an implementation of an Insights Logger, which lets its internal runtime log metrics and payloads. Since this is relatively low-level, it could be interesting to move this feature into MLServer, so that every inference runtime can leverage it.
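As a rough illustration, the shape of such a feature could look like the sketch below: a logger object that runtimes use to emit request and response payloads, which a real implementation would forward to an external sink. All class and method names here (`InsightsLogger`, `CustomRuntime`, `log`) are hypothetical and do not reflect the actual Tempo or MLServer API.

```python
import json
from typing import Any


class InsightsLogger:
    """Hypothetical sketch: collects payloads and metrics from a runtime."""

    def __init__(self) -> None:
        self.records = []

    def log(self, kind: str, data: Any) -> None:
        # Store a serialisable record; a real logger would forward it to an
        # external sink (e.g. a message broker or HTTP endpoint) instead.
        self.records.append({"kind": kind, "data": data})


class CustomRuntime:
    """A toy inference runtime that reports insights while predicting."""

    def __init__(self, insights: InsightsLogger) -> None:
        self.insights = insights

    def predict(self, payload: dict) -> dict:
        self.insights.log("request", payload)
        result = {"outputs": [sum(payload.get("inputs", []))]}
        self.insights.log("response", result)
        return result


logger = InsightsLogger()
runtime = CustomRuntime(logger)
runtime.predict({"inputs": [1, 2, 3]})
print(json.dumps(logger.records, indent=2))
```

Hosting the logger in MLServer rather than Tempo would mean any custom runtime gets this hook for free, instead of each runtime reimplementing payload capture.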