Hi @userbm,
You can obtain the contributions for all types of explanations by calling the methods that compute them directly; these are showcased in our notebook AReM.ipynb: local_pruning, local_event, local_feat, local_cell_level, prune_all, event_explain_all, and feat_explain_all.
Additionally, if you pass the argument "path" in the explanation configuration dictionaries, the contributions will also be saved to a CSV file on your local device.
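For concreteness, here is a minimal sketch of obtaining the contributions as DataFrames rather than plots (assuming a model wrapper `f`, a 3-D instance array `pos_x_data`, a baseline `average_event`, identifier arguments `entity_uuid`/`entity_col`, a list `model_features`, and a pruning index `pruning_idx` as placeholders; check the exact signatures against AReM.ipynb and your installed TimeSHAP version):

```python
from timeshap.explainer import local_event, local_feat

# Explanation configuration dictionaries; adding a "path" entry also writes
# the resulting contributions to a CSV file on disk.
event_dict = {"rs": 42, "nsamples": 32000, "path": "event_contributions.csv"}
feature_dict = {"rs": 42, "nsamples": 32000,
                "feature_names": model_features,   # placeholder: your feature names
                "path": "feature_contributions.csv"}

# Both calls return pandas DataFrames with the Shapley values, so the
# contributions are available as variables (and, via "path", as CSV files).
event_data = local_event(f, pos_x_data, event_dict, entity_uuid, entity_col,
                         average_event, pruning_idx)
feature_data = local_feat(f, pos_x_data, feature_dict, entity_uuid, entity_col,
                          average_event, pruning_idx)

print(event_data.head())
print(feature_data.head())
```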
If you have any further questions, don't hesitate to contact us.
Thank you @JoaoPBSousa for the reply. AReM.ipynb was very instructive. Going through the code, I find that event-wise SHAP values can be obtained through either of the following:
Hi @userbm,
In TimeSHAP you can only obtain the event-wise Shapley values through the method timeshap.explainer.event_level.event_level, but this method is currently wrapped by timeshap.explainer.event_level.local_event, which is the one users are supposed to use.
Regarding the methods you mentioned:
timeshap.explainer.pruning.local_pruning() is responsible for calculating the pruning-algorithm values. Once these values are obtained, a pruning index can be calculated by applying the user-defined pruning tolerance (a short sketch follows at the end of this paragraph).
timeshap.explainer.kernel.TimeShapKernel.shap_values() is responsible for calculating all types of explanations, depending on the explanation mode you provide when creating the TimeShapKernel object.
If you only require standard explanations from TimeSHAP, I would recommend using the following methods: local_pruning, local_event, local_feat, local_cell_level, prune_all, event_explain_all, and feat_explain_all. If you need to use other methods in the package, feel free to ask any questions you may have.
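As a rough sketch of the pruning step (same placeholder names as in the earlier snippet; verify the signature against your TimeSHAP version):

```python
from timeshap.explainer import local_pruning

# The user-defined pruning tolerance lives in the pruning configuration dict.
pruning_dict = {"tol": 0.025}

# coal_plot_data holds the pruning-algorithm values computed by local_pruning;
# coal_prun_idx is the pruning index obtained by applying the tolerance to them.
coal_plot_data, coal_prun_idx = local_pruning(
    f, pos_x_data, pruning_dict, average_event, entity_uuid, entity_col, verbose=True
)

# coal_prun_idx is expressed relative to the end of the sequence (a negative
# offset), so a positive index can be recovered from the sequence length.
pruning_idx = pos_x_data.shape[1] + coal_prun_idx
```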
In response to your question about the pruning index, I would need more context to provide a specific answer. In TimeSHAP, the pruning index indicates that all events older than that index are grouped together, so that computational power is not wasted on using them for coalitions and scoring. This index affects the calculated explanations, as it changes the number of events that are perturbed to compute them.
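To illustrate the effect, a hypothetical comparison (assuming, as in the notebook examples, that a pruning index of 0 leaves all events unpruned; placeholder names as before):

```python
# With pruning disabled, every event in the sequence is perturbed and
# receives its own Shapley value.
unpruned = local_event(f, pos_x_data, event_dict, entity_uuid, entity_col,
                       average_event, 0)

# With a computed pruning index, events older than the index are grouped into
# a single coalition, so fewer events are perturbed and the returned DataFrame
# aggregates the older events' contribution into one entry.
pruned = local_event(f, pos_x_data, event_dict, entity_uuid, entity_col,
                     average_event, pruning_idx)

print(len(unpruned), len(pruned))
```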
I hope this answer is helpful. If you have any further questions, feel free to ask.
Thank you @JoaoPBSousa for the detailed reply. This sufficiently answers my doubts.
I need the values of the event/feature contributions as variables, instead of plots. Is it possible to get that?