zddzxxsmile opened this issue 3 years ago
Hi,
Thank you for your question. In principle, all sequences should contribute equally to the final values of the network parameters. One way to check this is to monitor each layer's activations when the network is fed your sequence of interest.
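As a minimal sketch of what "monitoring each layer's activations" can look like in practice, the snippet below uses PyTorch forward hooks on a small stand-in network (the model, input size, and layer layout are all hypothetical, not taken from this repository):

```python
import torch
import torch.nn as nn

# Hypothetical toy network standing in for the trained model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

activations = {}

def make_hook(name):
    # Record each layer's output tensor under the layer's name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every layer (skip the container itself).
for name, module in model.named_modules():
    if name:
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 8)  # your sequence of interest, encoded as a tensor
model(x)

for name, act in activations.items():
    print(name, tuple(act.shape))
```

Running your real sequence through the model this way lets you inspect which layers respond strongly to it, without modifying the network.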
After training, the network operates as if it had been trained with supervised learning, so all the standard interpretability methodologies should apply. I would recommend SHAP, a widely used explainable-AI tool (https://github.com/slundberg/shap).
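SHAP itself requires installing the `shap` package, but the core idea — scoring each input by how much the prediction changes when that input is perturbed — can be illustrated with a much cruder occlusion-style check. The sketch below is plain NumPy with a hypothetical linear "model" standing in for the trained network; it is not SHAP, just a dependency-free illustration of attribution:

```python
import numpy as np

# Hypothetical toy "model": a fixed linear scorer standing in for the
# trained network's output for one class.
weights = np.array([0.1, 2.0, -0.3, 0.05])

def predict(x):
    return float(weights @ x)

def occlusion_importance(x, baseline=0.0):
    """Score each input position by how much the prediction changes
    when that position is replaced with a baseline value."""
    base_score = predict(x)
    scores = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline
        scores.append(base_score - predict(x_masked))
    return np.array(scores)

x = np.array([1.0, 1.0, 1.0, 1.0])
print(occlusion_importance(x))  # position 1 dominates: [0.1, 2.0, -0.3, 0.05]
```

SHAP produces principled versions of these per-input scores (Shapley values), with visualizations that make it easier to see which sequence drove a given classification.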
Dear authors,
Your work is very valuable. I am a doctor working in a hospital and not very familiar with the code. My question is: how can I find out which specific sequence contributed to the classification?
Thanks very much.