bentoml / Yatai

Model Deployment at Scale on Kubernetes 🦄️
https://bentoml.com

Provide a feature to save all model inputs/outputs to external storage for later debugging or replay #317

Open · withsmilo opened this issue 2 years ago

withsmilo commented 2 years ago

Hi, BentoML team. This is a new suggestion for Yatai. When serving an ML model, it is common to store the input provided to the model and the output it returns in external storage, to be used later for debugging or replay. Most ML services that need a "feedback" loop require this capability, so it would be good if BentoML / Yatai provided it by default.
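For reference, here is roughly what we do by hand in every service today (a minimal sketch; `model_predict`, `log_inference`, and `serve_request` are illustrative names, and the JSON-lines file stands in for an external sink such as S3 or a message queue):

```python
import json
import time
import uuid

def model_predict(features: dict) -> dict:
    # Stand-in for the actual model call.
    return {"score": 0.5}

def log_inference(features: dict, prediction: dict,
                  path: str = "inference_log.jsonl") -> None:
    # Append one input/output record so it can be replayed later.
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "input": features,
        "output": prediction,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def serve_request(features: dict) -> dict:
    prediction = model_predict(features)
    log_inference(features, prediction)
    return prediction
```

Having this built in would remove that boilerplate from every service.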

ssheng commented 2 years ago

@withsmilo we are in the design phase of a model monitoring solution that offers APIs for logging features and inference results, plus configuration for shipping the logs to a destination of choice. If possible, we can get on a call to walk through our design with you and verify that it meets your requirements.
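As a rough sketch of the direction (not a committed interface — the `bentoml.monitor` name and the `log()` fields below are illustrative of the design, and `predict` is a hypothetical stand-in for a real model call):

```python
import bentoml

def predict(features: dict) -> str:
    # Hypothetical stand-in for the real model call.
    return "setosa"

features = {"sepal_length": 5.1, "sepal_width": 3.5}

# Collect the features and prediction for one inference; where the
# records are shipped (local file, object storage, a monitoring vendor)
# would be controlled by deployment configuration, not code.
with bentoml.monitor("iris_prediction") as mon:
    for name, value in features.items():
        mon.log(value, name=name, role="feature", data_type="numerical")
    result = predict(features)
    mon.log(result, name="pred", role="prediction", data_type="categorical")
```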

withsmilo commented 2 years ago

@ssheng really great news! Thanks!