hanulpark98 opened this issue 2 months ago
Hi,
Thank you for your message and interest in our work! I believe this should definitely be possible, ideally by using TreeSHAP [1] directly. However, this would require converting GRANDE from its dense format into a more common tree representation. While this is a purely technical task, it might be challenging for those just beginning to work with our code.
In fact, this is something I’ve wanted to do for some time, as it may also improve inference speed. I’ll aim to write a function for this when I can find the time, which should make it easily compatible with (tree) SHAP.
[1] Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., ... & Lee, S. I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56-67.
Hi, I've been really enjoying the work you've done. Is there a way I could use XAI methods, such as SHAP values, with GRANDE for interpretability?
I've been experimenting with the KernelSHAP method, but I'm finding it a bit hard to fix the data shapes so that they fit into the KernelExplainer.