mida-project / prototype-multi-modality-assistant

[IJHCS] An assistant prototype for breast cancer diagnosis prepared with a multimodality strategy. The work was published in the International Journal of Human-Computer Studies.
https://mida-project.github.io/prototype-multi-modality-assistant

Assistant Pre-User Testing Number Seven Improvements #18

Closed FMCalisto closed 5 years ago

FMCalisto commented 5 years ago

In an early phase (issue #13), we developed an Assistant to help us automate cancer diagnosis. With a large, available dataset, it is now possible to apply recommendation methodologies (i.e., AI algorithms) that can automatically classify sets of medical images and detect lesions, assigning a high probability to the cases where intervention is necessary. These methodologies will have a high impact on the clinical field. Such automation is crucial since it reduces the inspection burden on radiologists, which is still handled in a rudimentary way in current clinical setups. Besides having large datasets, patient follow-up is also crucial: the annotations of a given patient should be analyzed over time.

Based on the literature, we plan to improve and prepare our Assistant for future user tests, in the context of scaling our solution. We aim to understand how clinical institutions can adopt our system so that it has a real impact on their healthcare workflows.

In this set of issues, our requirements are as follows: three conditions must be addressed across the final solution. We aim to deliver a set of prototype improvements to support the Assistant's future user tests, detailed below.

List of enhancement features from the pre-user testing phases:

The first task, Choosing Randomly N Patients Routine from M Patients Set (issue #19), is simple, as we only need to restrict some already implemented basic features. The idea is to control, at random, how the studyList.json file generates the Patient IDs. We need to force the studyList.json file to include only N patients, sampled from a set of M patients; a minimal sketch of this sampling follows.
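As a rough illustration (not the repository's actual code), the sketch below draws N patients at random from the full set and writes them out as studyList.json. The Study shape, the allPatients.json source file, and the sampleStudyList helper are hypothetical names for this example.

```ts
// Hypothetical sketch: sample N patients from the full M-patient set
// and persist them as studyList.json. Run with Node.js / ts-node.
import * as fs from "fs";

interface Study {
  patientID: string;
  // ...other per-study fields omitted for brevity
}

// Partial Fisher-Yates shuffle: the first n slots end up holding a
// uniform random sample without replacement.
function sampleStudyList(all: Study[], n: number): Study[] {
  const pool = [...all];
  const count = Math.min(n, pool.length);
  for (let i = 0; i < count; i++) {
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, count);
}

// "allPatients.json" stands in for wherever the full M-patient set lives.
const all: Study[] = JSON.parse(fs.readFileSync("allPatients.json", "utf8"));
fs.writeFileSync(
  "studyList.json",
  JSON.stringify(sampleStudyList(all, 10), null, 2)
);
```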

Developing the Creating eXplainability (XAI) Feature Element (issue #20) will also be a simple task. The idea is to implement the same concept as demonstrated in the following Figure: when the user presses the Explain button, it opens the Heatmap prototype (issue #5 of the prototype-heatmap repository) for the respective Patient ID and images, as sketched below.
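A minimal sketch of that wiring, assuming a hypothetical explain-button element and a query-string route into the Heatmap prototype (neither is confirmed by the source), could look like this:

```ts
// Hypothetical wiring for the Explain button. The element ID, the
// query-string route, and the state variables are assumptions for
// illustration; the real prototypes may pass this context differently.
const HEATMAP_BASE_URL = "https://mida-project.github.io/prototype-heatmap";

// Assume the viewer tracks the selected patient and image elsewhere.
let currentPatientID = "P001";
let currentImageID = "IMG-01";

function openHeatmap(patientID: string, imageID: string): void {
  const url =
    `${HEATMAP_BASE_URL}/?patient=${encodeURIComponent(patientID)}` +
    `&image=${encodeURIComponent(imageID)}`;
  window.open(url, "_blank"); // show the explanation view in a new tab
}

document.getElementById("explain-button")?.addEventListener("click", () => {
  openHeatmap(currentPatientID, currentImageID);
});
```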

If the user wants to Reject (above Figure) the result provided by the Assistant, our system must provide a way to do so (issue #21). We created the User Edit Control of the AI Answer on Reject (issue #21) for that purpose. When the user presses the Reject button, the User Interface (UI) shows several options to edit the final answer. From there, we can count and compare the True-Positive, True-Negative, False-Positive, and False-Negative values, as in the sketch below.
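As a hedged sketch of that tally, assuming hypothetical CaseResult records holding a reference diagnosis and the clinician's final (possibly edited) answer:

```ts
// Hedged sketch: count TP/TN/FP/FN by comparing each case's final
// (possibly user-edited) answer against the reference diagnosis.
// The CaseResult shape and label values are assumptions.
type Label = "malignant" | "benign";

interface CaseResult {
  groundTruth: Label; // reference diagnosis for the case
  finalAnswer: Label; // answer after the user's edits on Reject
}

interface ConfusionCounts {
  tp: number;
  tn: number;
  fp: number;
  fn: number;
}

function tallyConfusion(results: CaseResult[]): ConfusionCounts {
  const c: ConfusionCounts = { tp: 0, tn: 0, fp: 0, fn: 0 };
  for (const r of results) {
    if (r.groundTruth === "malignant") {
      if (r.finalAnswer === "malignant") c.tp++;
      else c.fn++;
    } else {
      if (r.finalAnswer === "benign") c.tn++;
      else c.fp++;
    }
  }
  return c;
}
```

From these counts, standard measures such as sensitivity, TP / (TP + FN), and specificity, TN / (TN + FP), can be derived to compare the original AI answers against the edited ones.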

FMCalisto commented 5 years ago

By mistake, this issue was closed. We will reopen it.

FMCalisto commented 5 years ago

This issue will be closed. The reasons can be found here.