SeldonIO / MLServer

An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
https://mlserver.readthedocs.io/en/latest/
Apache License 2.0

Add support to provide feedback #42

Open adriangonz opened 4 years ago

adriangonz commented 4 years ago

Seldon Core currently supports providing a "reward signal" as feedback on a model's predictions. The model receives this signal as a request sent to a /feedback endpoint. Since this modifies the server protocol, it would be good to consider adding it as a server extension for MLServer.
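To illustrate the idea, here is a minimal, framework-agnostic sketch of what a feedback handler could track. This is not MLServer's or Seldon Core's actual API; the `FeedbackStore` class, its method names, and the request-ID keying scheme are all assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FeedbackStore:
    """Hypothetical store accumulating reward signals per prediction ID.

    A /feedback endpoint could delegate to something like this: each
    feedback request carries the ID of an earlier prediction plus a
    scalar reward, and the store aggregates them for later analysis.
    """

    rewards: Dict[str, List[float]] = field(default_factory=dict)

    def handle_feedback(self, request_id: str, reward: float) -> None:
        # Record one reward signal against a previous prediction.
        self.rewards.setdefault(request_id, []).append(reward)

    def average_reward(self, request_id: str) -> float:
        # Mean reward seen so far for this prediction (0.0 if none).
        values = self.rewards.get(request_id, [])
        return sum(values) / len(values) if values else 0.0
```

For example, after `handle_feedback("req-1", 1.0)` and `handle_feedback("req-1", 0.0)`, `average_reward("req-1")` returns `0.5`. The open design question in this issue is where such state should live and how the endpoint fits into the inference protocol.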

duychu commented 2 years ago

@adriangonz do we have a plan to add it in the near future?

adriangonz commented 2 years ago

Hey @duychu,

We want to add it, but it's not on the immediate roadmap yet. We're still trying to figure out the best way to introduce this into the architecture at the serving-orchestrator level (i.e. at the Seldon Core / KServe level).