awsm-research / PyExplainer

PyExplainer: A Local Rule-Based Model-Agnostic Technique (Explainable AI)

Questions about the results obtained by an XAI method #22

Status: Open · 9527-ly opened this issue 1 year ago

9527-ly commented 1 year ago

I've noticed a strange phenomenon. With the same model architecture, the same training and test samples, and every other step kept identical, the values produced by an XAI method (e.g., Saliency) to explain the model should, in theory, be the same across runs. However, when I retrain a new model, the explanation values I get are completely different from those of the previous model. The interpretability values are unstable, and the results cannot be reproduced. The only way I get identical results is to save the model after training and then reload its parameters. Does anyone know why this happens?
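
For reference, one common cause of what is described above is that each retraining starts from a different random initialization (and possibly different data shuffling), so the two "identical" runs converge to different weights, and gradient-based explanations are a deterministic function of the weights plus the input. Below is a minimal, self-contained PyTorch sketch illustrating this; the toy model, data, seed values, and the `train_model`/`saliency` helpers are all hypothetical, not PyExplainer's API or the poster's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)             # fix the data so every run sees identical samples
X = torch.randn(64, 4)           # hypothetical training set
y = torch.randint(0, 2, (64,))
x_test = torch.randn(1, 4)       # hypothetical test sample

def train_model(seed=None):
    # Without a seed, weight initialization differs on every call,
    # so two otherwise-identical trainings end at different parameters.
    if seed is not None:
        torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

def saliency(model, x):
    # Vanilla gradient saliency: |d score / d input| for the top class.
    x = x.detach().clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.abs()

# Unseeded: two retrainings give different weights, hence different saliency.
s1 = saliency(train_model(), x_test)
s2 = saliency(train_model(), x_test)
print(torch.allclose(s1, s2))    # almost certainly False

# Seeded: both runs reproduce the same weights and the same saliency values.
s3 = saliency(train_model(seed=42), x_test)
s4 = saliency(train_model(seed=42), x_test)
print(torch.allclose(s3, s4))    # True (on CPU with deterministic ops)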