Closed — mahyahemmat closed this issue 1 year ago
Hi, thanks for using DSM! The output of a survival model is a function of time, whereas SHAP methods are designed to work with probabilistic outputs for binary outcomes.
One can still use SHAP values by looking at the output of DSM at a fixed time horizon — say, predicted 5-year survival.
There is a simple workaround to get SHAP to work with DSM: write an external function that wraps the model and calls its predict-survival function at a fixed time horizon. You can then effectively treat the model as a binary classifier at that horizon.
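A minimal sketch of that wrapper, assuming the fitted model exposes a `predict_survival(x, t)` method returning survival probabilities at time `t` (as DSM does); the SHAP calls are shown in comments since the exact explainer setup depends on your data:

```python
import numpy as np

def make_fixed_horizon_fn(model, horizon):
    """Wrap a survival model as a single-output probability function.

    SHAP explainers expect f(x) -> 1D array of scores, so we evaluate
    the model's survival function at one fixed time horizon.
    """
    def f(x):
        # predict_survival is assumed to return P(T > horizon | x) per row
        surv = np.asarray(model.predict_survival(x, horizon))
        return surv.reshape(-1)
    return f

# Usage sketch with a fitted DSM `model` (names hypothetical):
# import shap
# f = make_fixed_horizon_fn(model, horizon=5 * 365)  # 5-year survival
# explainer = shap.KernelExplainer(f, shap.sample(x_train, 100))
# shap_values = explainer.shap_values(x_test)
```

KernelExplainer is model-agnostic, so it only needs this scalar-output function; it never has to see the (scale, shape, logits) tuple that the underlying network produces.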
Hope this helps
Thanks, this worked using shap.KernelExplainer!
Awesome! Would love to see how DSM performs on your target application. Keep in touch!
Hi,
I'm trying to interpret the model outputs by visualizing feature importance. Is there a way to do that?
shap.DeepExplainer usually works with PyTorch models, but I've been having problems making it work with a DSM model, whose output is a tuple (scale, shape, logits).