viadee / javaAnchorExplainer

Explains machine learning models fast using the Anchor algorithm originally proposed by marcotcr in 2018
BSD 3-Clause "New" or "Revised" License

Limited Results, Low Coverage #30

Open KuzonFyre opened 1 year ago

KuzonFyre commented 1 year ago

I am using this with the Anchors Adapter and my implementation is similar to the titanic example located at https://github.com/viadee/xai_examples/tree/master. My results are coming in with very low coverage; that is the first problem. What does this suggest about my data? Or is there a problem with how I encoded it?

IF *FeatureName* 1.0 {1, -0.98}
THEN PREDICT 0
WITH PRECISION 1 AND COVERAGE 0.02

A few details here: this is running in a streaming application. Data comes in line by line and I run data preprocessing on it. In order to get an explanation, I am forced to convert it to a TabularInstance along with a discretized version of the data. Here is a method I created.

    // NOTE: the method signature below is reconstructed for completeness; the
    // original snippet omitted it. field.get(...) can throw IllegalAccessException.
    private GenericColumn[] buildColumns(Prediction instance) throws IllegalAccessException {
        ArrayList<GenericColumn> anchorFeatures = new ArrayList<>();
        // Map each field of the Prediction POJO to a matching column type.
        for (Field field : Prediction.class.getDeclaredFields()) {
            field.setAccessible(true);
            Object value = field.get(instance);
            if (value instanceof Integer) {
                anchorFeatures.add(new IntegerColumn(field.getName()));
            } else if (value instanceof Double) {
                anchorFeatures.add(new DoubleColumn(field.getName()));
            }
        }
        return anchorFeatures.toArray(new GenericColumn[0]);
    }
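For anyone wanting to reproduce the idea without the library on the classpath, here is the same reflection loop as a standalone sketch. The `Prediction` class here is a hypothetical stand-in for my streaming record, and column types are reported as plain strings instead of the real anchorj column classes:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ColumnMappingSketch {
    // Hypothetical stand-in for the streaming record POJO.
    static class Prediction {
        int passengerClass = 3;
        double fare = 7.25;
        int age = 22;
    }

    // Mirrors the reflection loop above: Integer fields map to an
    // IntegerColumn-like type, Double fields to a DoubleColumn-like type.
    // Primitive int/double fields are autoboxed by Field.get, so the
    // instanceof checks work for them too.
    static List<String> mapColumns(Object instance) throws IllegalAccessException {
        List<String> columns = new ArrayList<>();
        for (Field field : instance.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            Object value = field.get(instance);
            if (value instanceof Integer) {
                columns.add("IntegerColumn:" + field.getName());
            } else if (value instanceof Double) {
                columns.add("DoubleColumn:" + field.getName());
            }
        }
        return columns;
    }

    public static void main(String[] args) throws IllegalAccessException {
        System.out.println(mapColumns(new Prediction()));
    }
}
```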

First off, it would be great if the Adapters supported an easy way to convert one line of data using .build. Second, tabular.getVisualizer().visualizeResult(anchor) is giving me an issue because IntegerColumn does not set the discretizer. I wish I could give an easy method for replicating this issue.

fkoehne commented 1 year ago

Hey - good to hear that you are working with our implementation! It has not seen a lot of maintenance lately and we are discussing how to proceed from here. Hence, I am interested in what you are planning to do with it. Would you mind sharing your plans?

TobiasGoerke commented 1 year ago

You've hit a rule that has very high precision but low coverage. This is both one of Anchors' features and one of its shortcomings, and it provides valuable insight.

Ribeiro et al. note that:

A rule that exactly matches x is always a valid anchor, albeit with very low coverage, and thus, is of little use. Our algorithm can always recover this anchor, and thus is guaranteed to terminate in a bounded number of iterations. In pathological cases, it is possible for KL-LUCB to require a very large number of samples from D in order to separate two candidate rules with a high confidence; however, this can be alleviated by increasing the tolerance ε, the width δ, or by setting a maximum number of samples.

This is usually the case when the explained instances

are near a boundary of the black box model’s decision function, or predictions of very rare classes [...], where a particular prediction is very near the boundary of the decision function – almost any change to the instance results in a change in prediction.

I'd guess this is the case for you. Your model focuses on exactly one feature manifestation of your instance, and every time it gets changed in its perturbation space, the model takes another feature into account for its decision. Your model is both very sure and unable to generalize this prediction (possibly interpretable as overfitting).
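To make the precision/coverage trade-off concrete, here is a standalone sketch (illustrative only, not anchorj code) of how the two metrics relate. A rule that matches almost nothing but is always right yields exactly the pattern you observed: precision 1, coverage 0.02.

```java
import java.util.Arrays;
import java.util.function.Predicate;
import java.util.function.ToIntFunction;

public class RuleMetricsSketch {

    // Coverage: fraction of the whole dataset the rule applies to.
    static double coverage(double[][] data, Predicate<double[]> rule) {
        return (double) Arrays.stream(data).filter(rule).count() / data.length;
    }

    // Precision: among the covered instances, the fraction whose model
    // prediction matches the anchor's predicted label.
    static double precision(double[][] data, Predicate<double[]> rule,
                            ToIntFunction<double[]> model, int anchorLabel) {
        return Arrays.stream(data).filter(rule)
                .mapToInt(model)
                .mapToDouble(p -> p == anchorLabel ? 1.0 : 0.0)
                .average().orElse(0.0);
    }

    public static void main(String[] args) {
        // Hypothetical model: predicts class 0 only in a very narrow region.
        ToIntFunction<double[]> model = x -> x[0] < -0.9 ? 0 : 1;
        // 50 instances; exactly one falls into that region.
        double[][] data = new double[50][1];
        for (int i = 0; i < 50; i++) data[i][0] = i - 1.0;
        // A rule covering (almost) only the explained instance.
        Predicate<double[]> rule = x -> x[0] <= -0.9;
        System.out.printf("precision=%.2f coverage=%.2f%n",
                precision(data, rule, model, 0), coverage(data, rule));
        // prints "precision=1.00 coverage=0.02"
    }
}
```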

Hope that helps. Also, very excited you're using this project and interested in what for :)

KuzonFyre commented 1 year ago

@TobiasGoerke Thanks, this is all helpful information. @fkoehne It's a business implementation so I can't share exact details. The idea is to have a batch process that rebuilds the model using h2o and their MOJO/POJO conversion. I load the model.zip file in a Spring Boot app in Java and pull in live data from a streaming process. As a result, my training data and the data to predict are separate. In the titanic example, the explained instance comes from the training data that was put into an AnchorTabular object. So I have to build a TabularInstance object to calculate an explainable outcome. It would be nice if the Adapters extension had a build() method that would construct a TabularInstance and create the discretized versions of my data so I don't have to do that manually.

I ultimately want to use the explanations in a UI. For live predictions, I want to construct a natural language explanation of the outcome the model suggests by using a combination of Shapley values and Anchors. For example, "The model suggests approving this request because [DATA] is greater than 5". It doesn't have to look like that, but the idea is that there is an easy way for a user to interpret a decision the model made.
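Concretely, the rendering step could look like this rough sketch (plain Java, names illustrative, independent of any library): anchor conditions go in as feature/condition pairs and come out as one sentence.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ExplanationTextSketch {
    // Turns anchor conditions (feature name -> human-readable condition)
    // plus a predicted outcome into a single sentence for end users.
    static String describe(String outcome, Map<String, String> conditions) {
        String because = conditions.entrySet().stream()
                .map(e -> e.getKey() + " " + e.getValue())
                .collect(Collectors.joining(" and "));
        return "The model suggests " + outcome + " this request because " + because + ".";
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps the conditions in insertion order.
        Map<String, String> conditions = new LinkedHashMap<>();
        conditions.put("amount", "is greater than 5");
        conditions.put("risk score", "is below 0.3");
        System.out.println(describe("approving", conditions));
        // prints "The model suggests approving this request because amount is
        // greater than 5 and risk score is below 0.3."
    }
}
```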

Where I can see future work here is more integration with h2o and easier data conversion.

Feel free to ask more questions. I am really excited about the work you are both doing!

TobiasGoerke commented 1 year ago

It would be nice if the Adapters extension had a build() method that would construct a TabularInstance and create the discretized versions of my data so I don't have to do that manually.

Not sure if I understand your request correctly. There's a default builder already that you can use. You'd just need to create a TabularInstance before. Automatically discretizing data is difficult, if not impossible, if that is what you're asking for. However, it makes sense to spend time on this, as discretization often determines the success of an explainer (or ML model).
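For illustration, the simplest discretization strategy, equal-width binning, looks like this. This is a generic sketch, not anchorj's discretizer; the bounds and bin count are choices you'd have to make per feature, which is exactly why full automation is hard:

```java
public class EqualWidthDiscretizerSketch {
    // Equal-width binning: split [min, max] into binCount intervals of equal
    // width and return the index of the interval containing the value.
    // Values outside the range are clamped to the first or last bin.
    static int bin(double value, double min, double max, int binCount) {
        if (value <= min) return 0;
        if (value >= max) return binCount - 1;
        return (int) ((value - min) / (max - min) * binCount);
    }

    public static void main(String[] args) {
        // Discretize a fare of 7.25 into 5 bins over [0, 100].
        System.out.println(bin(7.25, 0.0, 100.0, 5)); // prints 0
    }
}
```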

I ultimately want to use the explanations in a UI. For live predictions, I want to construct a natural language explanation of the outcome the model suggests by using a combination of Shapley values and Anchors. For example, "The model suggests approving this request because [DATA] is greater than 5". It doesn't have to look like that, but the idea is that there is an easy way for a user to interpret a decision the model made.

That sounds great! Combining various mechanisms and visualizations is always a good idea.

By the way: I've written a chapter about Anchors in Interpretable Machine Learning. There, you'll find a visualization technique for Anchors that we've developed. It's currently only available for R, but it may inspire you on how to preprocess the results for your users: sample visualization

We're working on similar things, focusing on MLOps and machine learning lifecycles, improving their maturity (XAI being an important component here) for production use-cases in general, using the cloud and tools like Kubeflow.

Where I can see future work here is more integration with h2o and easier data conversion. Feel free to ask more questions. I am really excited about the work you are both doing!

While we're aware some users are actively using this project, we haven't received much valuable feedback yet, so thank you, yours is very much appreciated. In case you decide to move forward with this Anchors implementation and bring it to production, we'd be very interested to hear what your journey was like and how the tool helps you in real-life situations. Also, being a consultancy, we'd be happy to help you beyond contributing to open source. Feel free to send me a message in case you're interested and would like to talk about possible options.