Open schinto opened 3 years ago
Hi!
Are you planning to add functionality for interpreting GNN models to torchdrug?
There are benchmark datasets ("Benchmarks for interpretation of QSAR models") and a whole range of methods ("Explainability in Graph Neural Networks: A Taxonomic Survey"). Unfortunately, I haven't seen a method that directly combines explainability with uncertainty quantification (such as evidential deep learning). That would be really helpful for our medicinal chemists to understand why a model made a decision and how certain it is about that decision.
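Just to make the request concrete, here is a rough, plain-PyTorch sketch of the kind of pairing I mean (not torchdrug API, and MC dropout stands in here for evidential deep learning): dropout kept active at inference gives a cheap uncertainty estimate, and input gradients give the simplest possible attribution. The model, feature dimensions, and sample count are all placeholders.

```python
import torch
import torch.nn as nn

# Placeholder standing in for a trained GNN readout; any network with
# dropout layers can be treated the same way.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

# Hypothetical input: one row of pooled molecule features per compound.
x = torch.randn(8, 16, requires_grad=True)

# Uncertainty via MC dropout: keep dropout stochastic at inference time.
model.train()
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])  # (50, 8, 1)
pred_mean = samples.mean(dim=0)  # point prediction
pred_std = samples.std(dim=0)    # spread across passes ~ model uncertainty

# Explanation via input-gradient saliency.
model.eval()
model(x).sum().backward()
saliency = x.grad.abs()  # per-feature attribution magnitude

print(pred_mean.squeeze(-1), pred_std.squeeze(-1))
print(saliency.shape)  # torch.Size([8, 16])
```

With a real GNN one would compute the saliency per atom or bond feature rather than per pooled feature, and report each prediction together with its uncertainty so chemists can discount explanations attached to low-confidence predictions.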
Thanks
Hi! We are aware of the importance of explainable models in biomedicine, and we have considered it as part of our future plans. Unfortunately, due to the limited manpower we have now, such a feature may not be realized very soon. Thanks for your understanding.