adriangonz opened this issue 4 years ago
@adriangonz do we have a plan to add it in the near future?
Hey @duychu,
We want to add it, but it's not on the immediate roadmap yet. We're still trying to figure out the best way to introduce this into the architecture at the serving-orchestrator level (i.e. at the Seldon Core / KServe level).
Seldon Core currently supports providing a "reward signal" as feedback on the model's predictions. This is received by the model as a request sent to a `/feedback` endpoint. Since this modifies the server protocol, it would be good to consider adding it as a server extension for MLServer.
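To make the idea concrete, here is a minimal sketch of what a feedback message for such an endpoint could look like. This is illustrative only: the field names (`request`, `response`, `reward`) and the `build_feedback_payload` helper are assumptions loosely modelled on a SeldonMessage-style body, not MLServer's actual API.

```python
import json

def build_feedback_payload(request_data, response_data, reward):
    """Assemble a hypothetical feedback message pairing the original
    prediction request/response with a scalar reward signal.
    Field names are illustrative, not a confirmed schema."""
    return {
        "request": {"data": {"ndarray": request_data}},
        "response": {"data": {"ndarray": response_data}},
        "reward": reward,
    }

# Example: reward of 1.0 for a prediction the client judged correct.
payload = build_feedback_payload([[1.0, 2.0]], [[0.7]], reward=1.0)
body = json.dumps(payload)

# In practice the client would POST `body` to the model server's
# /feedback endpoint; the server would route it to the model so it
# can update internal state (e.g. a multi-armed bandit).
```

The key design question for MLServer would be how a runtime opts into receiving these messages, which is why framing it as a server extension seems natural.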