isee4xai / iSeeOnto

iSeeOnto is the ontology network created by the iSee consortium for sharing and reusing explanation experiences. For more information see https://isee4xai.com/

Explanation/Explainability Technique subclasses #16

Closed: anjanaw closed this issue 5 months ago

anjanaw commented 1 year ago

Explanations that answer questions like "What kind of algorithm is used in the system?" and "What is the scope of the system’s capability?" can be descriptions of the method or the model, such as "It uses a CNN architecture neural network trained to minimise prediction error against some training data" and "The goal of this model is to predict the probability of tumour in a given radiograph".

What explanation subclass should be used here? Contextual explanation class seems to be a suitable superclass, but can we have more descriptive subclasses? Also, can we have subclasses for Explainability Techniques under Data-driven and Knowledge extraction for techniques that generate these types of explanations?
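
As a concrete starting point, here is a minimal sketch of how such subclasses could be declared with rdflib. The namespace IRI and the new class names (ModelDescriptionExplanation, ModelDescriptionTechnique) are placeholders for discussion, not existing iSeeOnto terms.

```python
# Sketch only: adds candidate subclasses to a local graph for discussion.
# The namespace IRI and the new class names are assumptions, not terms
# that already exist in iSeeOnto.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

ISEE = Namespace("https://purl.org/isee/ontology#")  # placeholder namespace IRI

g = Graph()
g.bind("isee", ISEE)

# A candidate Explanation subclass for model/method descriptions,
# placed under the existing Contextual Explanation class.
g.add((ISEE.ModelDescriptionExplanation, RDF.type, OWL.Class))
g.add((ISEE.ModelDescriptionExplanation, RDFS.subClassOf, ISEE.ContextualExplanation))

# A candidate Explainability Technique subclass under Data-Driven Technique
# for techniques that generate these kinds of descriptions.
g.add((ISEE.ModelDescriptionTechnique, RDF.type, OWL.Class))
g.add((ISEE.ModelDescriptionTechnique, RDFS.subClassOf, ISEE.DataDrivenTechnique))

print(g.serialize(format="turtle"))
```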

ike01 commented 1 year ago

On the last bit, we currently have Caption Generation and DisCERN as subclasses of Data-Driven Technique. What other subclasses are needed?

anjanaw commented 1 year ago

Whatever you want to call explanations like this: "It uses a CNN architecture neural network trained to minimise prediction error against some training data." I'm not sure what to call them, hence the examples.

anjanaw commented 1 year ago

Also, explanations that answer questions like the ones below. For those, the explainability technique would simply be calling the AI Model to get the outcome for the modified instance.

Question --> Explanation
How does the model respond if I change feature X to value V? --> New outcome would be outcome B
What is the outcome if I change feature X to value V? --> Outcome would change to B from A
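
For reference, a minimal sketch of that kind of "what-if" explainer, assuming a scikit-learn-style model exposing predict(); the function, parameter, and feature names are hypothetical.

```python
# Sketch of a "what-if" explainer: change one feature and re-query the model.
# Assumes a scikit-learn-style model exposing predict(); names are hypothetical.
import copy

def what_if_explanation(model, instance, feature_index, new_value, feature_name="X"):
    """Return a textual explanation of how the outcome changes when one
    feature of the instance is set to a new value."""
    original_outcome = model.predict([instance])[0]

    modified = copy.deepcopy(instance)
    modified[feature_index] = new_value
    new_outcome = model.predict([modified])[0]

    if new_outcome == original_outcome:
        return (f"If {feature_name} is changed to {new_value}, "
                f"the outcome stays {original_outcome}.")
    return (f"If {feature_name} is changed to {new_value}, "
            f"the outcome would change to {new_outcome} from {original_outcome}.")
```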

dcorsar commented 1 year ago

Question --> Explanation
How does the model respond if I change feature X to value V? --> New outcome would be outcome B
What is the outcome if I change feature X to value V? --> Outcome would change to B from A

Are these types of questions / responses explanations of the system's reasoning process and the decision it made, or different queries to be made of the model? It may be that the latter are being made as part of an explanation strategy to help the user understand how changes in the input impact the outcome, but are they explanations in their own right? Or maybe they could be modelled as some kind of "Query Based Explanation Strategy"?

dcorsar commented 1 year ago

Explanations that answer questions like "What kind of algorithm is used in the system?" and "What is the scope of the system’s capability?" can be descriptions of the method or the model, such as "It uses a CNN architecture neural network trained to minimise prediction error against some training data" and "The goal of this model is to predict the probability of tumour in a given radiograph".

Would these be best captured in some form of Factual Query explanation that is effectively asking questions about the meta-data of a model?
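
For illustration, a minimal sketch of such a factual query answered from a model's metadata record rather than from its reasoning; the metadata fields and question keys are illustrative assumptions, not part of iSee.

```python
# Sketch of a "factual query" explainer that answers questions from a
# model's metadata record rather than from the model's reasoning.
# The metadata fields and question keys are illustrative assumptions.

MODEL_METADATA = {
    "algorithm": "CNN architecture neural network trained to minimise "
                 "prediction error against some training data",
    "scope": "predict the probability of tumour in a given radiograph",
}

def factual_query_explanation(question_key, metadata=MODEL_METADATA):
    """Answer a factual question about the model from its metadata."""
    if question_key == "algorithm":
        return f"It uses a {metadata['algorithm']}."
    if question_key == "scope":
        return f"The goal of this model is to {metadata['scope']}."
    return "No metadata available for this question."
```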

anjanaw commented 1 year ago

My understanding is that they are explanations in their own right: the iSee design requires answering these types of questions from the user, and the only way to answer them is to treat them as explanations generated by some explainer.