Closed — seansaito closed this issue 3 years ago
@vishalmour @sinhadebarchan @manduripramodh let's use this issue as a place to further discussion and development of a decoupling strategy.
Hey @vishalmour (hello from Japan!), any updates on this?
hello @seansaito, sorry for the late response on this. Currently we are trying a POC to check whether we can make a reference to the predict function (part of the deployed model) in the runtime, and package a dummy one with the artifact. If this works, we can possibly remove the dependency between the model and the explainer for LIME and the SHAP Kernel explainer. But since optimized SHAP explainers such as the Tree explainer need the model as part of explainer generation, the above approach will not work for those... @manduripramodh @sinhadebarchan please add if I missed something
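The POC idea above could be sketched roughly as follows. This is a hypothetical illustration, not the actual POC code: the `DecoupledExplainer` class, its `rebind` method, and the dummy predict function are all made-up names used to show the pattern of packaging a placeholder at build time and attaching the real predict function at runtime.

```python
import pickle


def dummy_predict(inputs):
    # Placeholder packaged with the artifact; never meant to run in production.
    raise RuntimeError("predict_fn was not rebound at runtime")


class DecoupledExplainer:
    """Hypothetical explainer that holds only a reference to predict_fn."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn

    def __getstate__(self):
        # Drop the model reference when serializing, so the artifact
        # contains no copy of the black-box model.
        state = self.__dict__.copy()
        state["predict_fn"] = None
        return state

    def rebind(self, predict_fn):
        # Attach the deployed model's real predict function at runtime.
        self.predict_fn = predict_fn

    def explain(self, x):
        # A real explainer would perturb x and query predict_fn many times;
        # here we just call it once to show the decoupled wiring.
        return {"input": x, "prediction": self.predict_fn(x)}


# Build step: package the explainer with the dummy predict function.
artifact = pickle.dumps(DecoupledExplainer(dummy_predict))

# Runtime: load the artifact and rebind the real predict_fn.
loaded = pickle.loads(artifact)
loaded.rebind(lambda x: 0.42)  # stand-in for the deployed model
print(loaded.explain(3)["prediction"])
```

As noted, this only works for explainers that treat the model as an opaque function (LIME, SHAP Kernel); Tree-style explainers introspect the model itself, so a late-bound function reference is not enough there.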
Close the issue and move it into Project TO-DO.
Today, the explainers expect a `predict_fn` at the build step, which gives them access to the black-box model for generating explanations. However, this introduces a dependency between the explainer and the black-box model, which can make it difficult to deploy explainer artifacts into some productive environments. We should come up with an option that allows users to properly decouple the explainer artifact from the `predict_fn`, to enable different kinds of deployment strategies. Inspiration comes from https://github.com/SeldonIO/seldon-core, which allows production-level deployment of ML models (kudos to @vishalmour and @sinhadebarchan for the share).