NorskRegnesentral / shapr

Explaining the output of machine learning models with more accurately estimated Shapley values
https://norskregnesentral.github.io/shapr/

Add tests using testthat #11

Open nikolase90 opened 5 years ago

nikolase90 commented 5 years ago

If you're working on one of these files, please add the URL to your branch or to the pull request. Mark the box once the changes are merged into master.

R-files

src-files

The following files should not be tested

martinju commented 4 years ago

I am getting a bit worried that every time we make a change to the shapr or explain functions, we have to re-create the test objects (explanation_explain_obj.rds and similar) that the test results are compared against. That means we don't have a good test for this. This is especially dangerous if one makes many changes in the same PR (although we seldom do that).

My suggestion is to add a separate test that checks only the Shapley values from the different test models. We could run an lapply call over ex_list, extract only the data.table with the Shapley values, and then do a testthat::expect_known_value or testthat::expect_known_object on that. What do you think @nikolase90? Or is there a better option?
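The suggested test might look roughly like this. This is only a sketch: the name `ex_list` comes from the comment above, and the `$dt` slot used to extract the Shapley value data.table from each explanation object is an assumption about shapr's internals.

```r
# Sketch of the proposed regression test (assumptions: `ex_list` is a list of
# explanation objects, and `$dt` holds the data.table of Shapley values).
library(testthat)

test_that("Shapley values are unchanged across test models", {
  # Extract only the Shapley value data.tables, dropping everything else
  # (so unrelated changes to the explanation objects don't break the test).
  shapley_values <- lapply(ex_list, function(ex) ex$dt)

  # Compare against a stored reference; the file is created on the first run.
  expect_known_value(shapley_values, file = "shapley_values.rds")
})
```

Keeping only the Shapley value tables in the reference object would mean the stored .rds files only need regenerating when the actual output values change, not on every refactor of the explanation objects.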

nikolase90 commented 4 years ago

@martinju I think that sounds like a good way to do it.