Hi @aisagarw,
I would like to contribute these papers. I do not have write access to create a branch and submit a pull request, so I am opening this issue instead. Could you please add the following papers to the list?
"Logic Explained Networks"
Authors: Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Liò, Marco Maggini, Stefano Melacci
Journal: Artificial Intelligence
Summary: Logic Explained Networks (LENs) offer interpretable deep learning models that provide human-understandable explanations using First-Order Logic, outperforming traditional white-box models in both supervised and unsupervised learning tasks.
"Entropy-based Logic Explanations of Neural Networks"
Authors: Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Liò, Marco Gori, Stefano Melacci
Conference: AAAI 2022
Summary: A novel end-to-end approach extracts concise First-Order Logic explanations from neural networks using an entropy-based criterion, improving both interpretability and classification accuracy in safety-critical domains.
"Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat"
Authors: Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich
Conference: ICML 2023
Summary: This paper introduces a method to iteratively carve concept-based interpretable models from a black-box model in a post-hoc manner, using First-Order Logic for explanations, while a residual network handles harder cases, achieving high interpretability without sacrificing performance.
"Distilling BlackBox to Interpretable models for Efficient Transfer Learning"
Authors: Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich
Conference: MICCAI 2023
Summary: This paper presents a concept-based interpretable model for chest X-ray classification that can be efficiently fine-tuned for new domains using minimal labeled data, leveraging semi-supervised learning and distillation from black-box models.
Regards,
Shantanu