oegedijk / explainerdashboard

Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.
http://explainerdashboard.readthedocs.io
MIT License
2.31k stars 332 forks

added dropna to avoid crash on nan values #275

Closed AlexanderZender closed 4 months ago

AlexanderZender commented 1 year ago

- Added `.copy()` to every `predict`/`predict_proba` call so the model only ever receives a definitive copy.
- Fixed a crash that occurs when categorical features contain NaN values.
- Added an option to allow NaN category values in the frontend.

issue: https://github.com/oegedijk/explainerdashboard/issues/273

AlexanderZender commented 1 year ago

During the initial opening sequence of the dashboard, a prediction is made using all-NaN values:

[screenshot: the dashboard's initial prediction input, with every feature set to NaN]

This causes an exception in the console saying the model's encoder cannot deal with the passed values. That is accurate, since e.g. Name is a string column but is passed -999. The initial exception does not seem to impact the dashboard. I don't know whether always doing 'NaN' predictions during the initial opening of the dashboard is intended, but I wanted to note it here.

AlexanderZender commented 1 year ago

I added a 'NaN' category value to the categorical_dict whenever a categorical feature contains NaN values. Without this, the model's encoder breaks, because -999 is always passed instead of NaN, which is a known/allowed value.
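Roughly, the idea looks like this (a sketch only; the toy frame `X` and `categorical_cols` are hypothetical stand-ins, while `categorical_dict` is the explainer attribute mentioned above):

```python
import pandas as pd

# Hypothetical input frame and list of categorical columns.
X = pd.DataFrame({"Name": ["Alice", "Bob", None], "Deck": ["A", None, "C"]})
categorical_cols = ["Name", "Deck"]

# Register 'NaN' as an allowed category for any column that actually
# contains missing values, so the frontend can send NaN through instead
# of a -999 placeholder the model's encoder does not know.
categorical_dict = {
    col: X[col].dropna().unique().tolist()
         + (["NaN"] if X[col].isna().any() else [])
    for col in categorical_cols
}
# {'Name': ['Alice', 'Bob', 'NaN'], 'Deck': ['A', 'C', 'NaN']}
```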

oegedijk commented 1 year ago

I'm not sure why we are adding all those `.copy()` calls to the prediction calls? Prediction shouldn't have any side effects on X, so why is this needed?

oegedijk commented 1 year ago

Could you add some tests that show this works?

AlexanderZender commented 1 year ago

The copy is technically not needed, but it avoids errors, for example when you pass a pipeline that manipulates the incoming dataframe in place. Those changes are indirectly applied to the dataframe stored inside the explainer dashboard, so the next time explainerdashboard calls predict_proba it uses the already-manipulated dataframe. That causes a mismatch inside the model's pipeline, because the dataframe no longer matches what it expects.

You could leave that to the user to manage, but it may not always be possible to add a copy inside the pipeline or their own code.

Edit: since your comment asked about the predict function: I was unsure whether this problem also occurs there. I only observed predict_proba causing issues, so the copy may be unnecessary for predict; I can remove the copies there.
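To make the failure mode concrete, here is a minimal sketch with a deliberately misbehaving, hypothetical transformer that mutates its input in place. Repeated calls compound the mutation on the caller's dataframe, while passing a copy shields it:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class InPlaceScaler(BaseEstimator, TransformerMixin):
    """Hypothetical, badly behaved transformer that mutates its input."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X["Age"] = X["Age"] / 100.0  # side effect on the caller's frame!
        return X

X = pd.DataFrame({"Age": [22.0, 35.0]})
scaler = InPlaceScaler()

scaler.transform(X)  # Age becomes 0.22, 0.35
scaler.transform(X)  # rescales the already-scaled frame: 0.0022, 0.0035
print(X)             # the stored dataframe has been silently corrupted

X2 = pd.DataFrame({"Age": [22.0, 35.0]})
scaler.transform(X2.copy())  # X2 itself stays untouched
print(X2)
```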

AlexanderZender commented 1 year ago

@oegedijk I added the requested test and removed copy from all predict function calls. I also added a test for categorical labels; it will fail, as explainerdashboard does not currently support these.

I'm not completely sure how many changes are needed to allow categorical labels, but the test is already there, along with a dataset to test with. The dataset is an adjusted subset of the car dataset from OpenML. Maybe this is something to look into, as there are numerous datasets with categorical labels, and as far as I can see explainerdashboard already converts numerical labels to strings (?)

oegedijk commented 1 year ago

> I added the requested test and removed copy from all predict function calls.

I guess all these predict functions only ever predict a single row, so maybe the cost of adding these copies is not so bad? An alternative would be to only copy for pipelines, but that would also introduce additional overhead.
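As a rough illustration of that alternative (the helper name and the `isinstance` check are mine, not part of the codebase):

```python
from sklearn.pipeline import Pipeline

def safe_predict_proba(model, X):
    # Only pay the copy cost for Pipelines, since a bare estimator
    # should not mutate X during prediction.
    if isinstance(model, Pipeline):
        X = X.copy()
    return model.predict_proba(X)
```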

oegedijk commented 1 year ago

> I also added a test for categorical labels; it will fail, as explainerdashboard does not currently support these.

I think it should be possible to reuse the Titanic test set? Just replace the 0/1 labels with 'survived' / 'not survived'. For testing the NaNs, we could just randomly sprinkle some NaNs in there.

What would be the fastest/cheapest model that works well with missing values, to minimize test time? (HistGradientBoostingClassifier comes to mind, but maybe there are faster options.)
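A minimal sketch of such a fixture, assuming the package's bundled `titanic_survive` loader; `HistGradientBoostingClassifier` accepts the sprinkled NaNs natively, and the string-label step at the end is exactly the case the new test targets (so it is expected to fail until categorical labels are supported):

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from explainerdashboard import ClassifierExplainer
from explainerdashboard.datasets import titanic_survive

X_train, y_train, X_test, y_test = titanic_survive()

# Randomly sprinkle some NaNs into the features.
rng = np.random.default_rng(0)
X_train = X_train.mask(rng.random(X_train.shape) < 0.05)
X_test = X_test.mask(rng.random(X_test.shape) < 0.05)

# HistGradientBoostingClassifier handles missing values natively,
# so no imputation step is needed.
model = HistGradientBoostingClassifier().fit(X_train, y_train)

# Replace the 0/1 labels with strings: the categorical-label case,
# expected to fail until explainerdashboard supports it.
y_test = y_test.replace({0: "not survived", 1: "survived"})
explainer = ClassifierExplainer(model, X_test, y_test)
```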

AlexanderZender commented 1 year ago

> > I added the requested test and removed copy from all predict function calls.
>
> I guess all these predict functions only ever predict a single row, so maybe the cost of adding these copies is not so bad? An alternative would be to only copy for pipelines, but that would also introduce additional overhead.

I think the overhead depends on the complexity of the dataset being used. When does explainerdashboard call the predict function?

AlexanderZender commented 1 year ago

> > I also added a test for categorical labels; it will fail, as explainerdashboard does not currently support these.
>
> I think it should be possible to reuse the Titanic test set? Just replace the 0/1 labels with 'survived' / 'not survived'. For testing the NaNs, we could just randomly sprinkle some NaNs in there.
>
> What would be the fastest/cheapest model that works well with missing values, to minimize test time? (HistGradientBoostingClassifier comes to mind, but maybe there are faster options.)

That would be possible, provided it does not cause issues with other tests. As I'm not familiar with the entire test suite, I'd rather avoid changing it, but I can adapt my tests.

As for training time, I just used a RandomForestClassifier, which trains in less than a second on my system. In my "pipeline", NaN values are never passed to the model, because the one-hot encoder ignores them. The test checks that NaNs in the original dataset do not break the dashboard; what the pipeline does with the NaNs is irrelevant.
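For context, a toy sketch of the kind of pipeline I mean (assuming a scikit-learn version whose `OneHotEncoder` treats NaN as just another category; the data and column names are made up):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy frame: 'Name' is categorical and contains a NaN.
X = pd.DataFrame({
    "Name": ["Alice", "Bob", np.nan, "Alice"],
    "Age": [22.0, 35.0, 58.0, 41.0],
})
y = [0, 1, 1, 0]

pipe = Pipeline([
    ("encode", ColumnTransformer(
        # NaN becomes its own one-hot category, and handle_unknown='ignore'
        # zero-encodes anything unseen at fit time, so the classifier
        # itself never receives a NaN.
        [("ohe", OneHotEncoder(handle_unknown="ignore"), ["Name"])],
        remainder="passthrough",
    )),
    ("model", RandomForestClassifier(n_estimators=10, random_state=0)),
])
pipe.fit(X, y)
print(pipe.predict_proba(X))
```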

AlexanderZender commented 1 year ago

I updated the test to use the available Titanic data. I perform the required manipulation in the test, e.g. adding NaNs to Name or label-encoding the target.

oegedijk commented 1 year ago

> As for training time, I just used a RandomForestClassifier, which trains in less than a second on my system. In my "pipeline", NaN values are never passed to the model, because the one-hot encoder ignores them. The test checks that NaNs in the original dataset do not break the dashboard; what the pipeline does with the NaNs is irrelevant.

Ah, apologies, I didn't fully understand then: I thought you were thinking of algorithms that can deal with missing values themselves (such as HistGradientBoostingClassifier), but you mean pipelines that handle the NaNs before the model sees them. Both might be something we should be able to support.

oegedijk commented 1 year ago

Will have a closer look at the code hopefully tomorrow or this weekend. But thanks already for this contribution, let's try to get it ready and released quickly!

AlexanderZender commented 1 year ago

> Will have a closer look at the code hopefully tomorrow or this weekend. But thanks already for this contribution, let's try to get it ready and released quickly!

Sounds good, tell me if you find any issues with it. Will you have time to take a look at the categorical labels? I think it would be great to merge these together.

AlexanderZender commented 1 year ago

You are correct, I removed it.

AlexanderZender commented 1 year ago

@oegedijk Totally forgot that there was a conflict, oops.

AlexanderZender commented 9 months ago

@oegedijk Any news on when this will be looked at?