Closed parano closed 3 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This needs further investigation - closing for now
Is your feature request related to a problem? Please describe.
When a new version of a model is produced, it is common practice to compare its performance and behavior against the previous version. BentoML produces prediction logs when the API server is serving production traffic, and each prediction log record contains both the inference request input data and the inference result. If data scientists could easily replay these prediction logs against a BentoService from a development or CI environment, they could get feedback and compare differences between model versions more efficiently.
See related discussion on online shadow deployment: https://github.com/bentoml/BentoML/discussions/1051
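The replay workflow described above could be sketched roughly as follows. This is a minimal illustration, not BentoML's actual API: the log line format (one JSON object per line with `request` and `result` keys) and the `predict_fn` callable are assumptions for the example.

```python
import json


def replay_prediction_logs(log_lines, predict_fn):
    """Replay logged inference requests against a new model version and
    collect the records where the new output differs from the logged one.

    log_lines:  iterable of JSON strings, each with "request" and "result"
                keys (an assumed log schema, not BentoML's actual format).
    predict_fn: callable representing the new model version under test.
    """
    diffs = []
    for line in log_lines:
        record = json.loads(line)
        new_result = predict_fn(record["request"])
        if new_result != record["result"]:
            diffs.append({
                "request": record["request"],
                "old": record["result"],
                "new": new_result,
            })
    return diffs


# Example: two logged predictions replayed against a stand-in "new model".
logs = [
    '{"request": {"x": 1.0}, "result": 2.0}',
    '{"request": {"x": 2.0}, "result": 4.0}',
]
new_model = lambda req: req["x"] * 2
print(replay_prediction_logs(logs, new_model))  # → [] (no behavior change)
```

In a real integration, `predict_fn` would invoke the candidate BentoService's inference API, and the comparison would likely need tolerance-aware checks rather than strict equality for floating-point outputs.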
Describe the solution you'd like
Describe alternatives you've considered
Suggestions are welcome!
Additional context
This is dependent on the Input/Output Adapter Refactoring #1002