[IJHCS] An assistant prototype for breast cancer diagnosis built on a multimodality strategy. The work was published in the International Journal of Human-Computer Studies.
Building on the Ribeiro et al. (video) work, it could be interesting that, when the system automatically classifies a new patient, it also shows other similar patients for whom the same classification was found. These similar patients must already have been verified by the clinician, to guarantee the reliability of the assistant.
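As a concrete illustration, the sketch below shows one way this similar-patient lookup could work: a nearest-neighbour search restricted to clinician-verified cases that share the predicted class. It is a minimal sketch under assumed representations; the names (VerifiedCase, retrieve_similar_cases) and the feature-vector encoding of patients are hypothetical, not part of the original prototype.

```python
# Hypothetical sketch: retrieve clinician-verified similar cases to support
# an automatic classification. Assumes each patient is encoded as a fixed-
# length feature vector; all names here are illustrative.
from dataclasses import dataclass

import numpy as np
from sklearn.neighbors import NearestNeighbors


@dataclass
class VerifiedCase:
    patient_id: str
    features: np.ndarray  # e.g. imaging / clinical descriptors
    label: str            # diagnosis already verified by the clinician


def retrieve_similar_cases(new_features, predicted_label, verified_cases, k=3):
    """Return up to k verified cases sharing the predicted label that are
    closest to the new patient in feature space."""
    # Restricting the pool to clinician-verified cases with the same
    # classification is what keeps the shown examples reliable.
    pool = [c for c in verified_cases if c.label == predicted_label]
    if not pool:
        return []
    X = np.stack([c.features for c in pool])
    nn = NearestNeighbors(n_neighbors=min(k, len(pool))).fit(X)
    _, idx = nn.kneighbors(np.asarray(new_features).reshape(1, -1))
    return [pool[i] for i in idx[0]]
```

The design choice here is that only verified cases are ever retrieved, so the assistant explains a new prediction by pointing to precedents a clinician has already confirmed.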
These ideas form the basis of the work that arose from researching the eXplainable Artificial Intelligence (XAI) topic. The topic has already been addressed by Andreas Holzinger, whose question What do we need to build explainable AI systems for the medical domain? and some Design Recommendations to Support Automated Explanation and Tutoring can help bring it to our work. In addition, the Medium.com article Explainable Artificial Intelligence (Part 1) — The Importance of Human Interpretable Machine Learning, despite being more informal, might be interesting to read.
Also worth looking at is the presentation by Rune Sætre, a trial lecture on the XAI topic. Finally, the work Human-Agent Teaming as a Common Problem for Goal Reasoning by Molineaux et al. is a good example of applying a Psychological Model of Explanation to the topic. It is also important to look at the INSIDE Project and see whether anything relevant can be found there.