Summary
The paper essentially answers the following three research questions.
What forms a (good) explanation in software engineering tasks? There is no clear answer, but the paper identifies four elementary explanation types.
How do we build explainable software analytics models, and how do we generate explanations from them? A model can be explainable in three ways:
The model itself is the explanation (global explanation)
Explaining the model's prediction for a single instance (local explanation)
Explaining the learning algorithm that produced the model
Simple models such as decision trees are often more explainable than sophisticated models such as neural networks; a sketch contrasting the first two explanation styles follows below.
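To make the global/local distinction concrete, here is a minimal sketch (mine, not from the paper) using scikit-learn: a small decision tree trained on hypothetical defect-prediction data serves as its own global explanation when printed as rules, while the root-to-leaf path taken by one instance acts as a simple local explanation. The feature names and data are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical defect-prediction data: rows are modules, columns are metrics.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["loc", "churn", "complexity", "num_authors"]  # assumed names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the whole model rendered as human-readable rules.
print(export_text(tree, feature_names=feature_names))

# Local explanation: the decision nodes one instance passes through.
path = tree.decision_path(X[:1])
print("nodes visited for instance 0:", path.indices.tolist())
```

Capping the depth keeps the printed rule set small enough for a human to read end to end, which is exactly what makes the tree itself usable as a global explanation.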
How might we evaluate the explainability of software analytics models?
A simple measure of explainability is the size of a model (a sketch of this measure follows below).
Conducting experiments with practitioners is the most reliable way to evaluate; machine-produced explanations can be compared against explanations produced by human engineers.
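As a rough illustration of the size-based measure (my assumption of a concrete proxy; the paper does not fix a metric), node count of a fitted decision tree can stand in for "model size":

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

for depth in (2, 4, 8):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    # tree_.node_count counts decision plus leaf nodes; under this crude
    # size measure, the shallower tree counts as the more explainable model.
    print(f"max_depth={depth}: {tree.tree_.node_count} nodes")
```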
Contributions of The Paper
Defines explainability in software engineering as “explainability or interpretability of a model measures the degree to which a human observer can understand the reasons behind a decision (e.g. a prediction) made by the model”
Discusses different approaches to design, produce, and evaluate explanations
Comments
Clean and concise
The claimed need for explanations would be better supported by a user study
Publisher
Proceedings - International Conference on Software Engineering
Link to The Paper
https://doi.org/10.1145/3183399.3183424
Name of The Authors
Dam, Hoa Khanh; Tran, Truyen; Ghose, Aditya