charmlab / mace

Model Agnostic Counterfactual Explanations

Issue in running project #7

Open HeyItsBethany3 opened 3 years ago

HeyItsBethany3 commented 3 years ago

Hi, I am trying to use this library and have followed the README, but I'm getting this error when I use the command: python batchTest.py -d credit -m mlp -n one_norm -a MACE_eps_1e-3 -b 0 -s 1. I've attached the error below.

[Screenshot of the error output, 2021-03-05 at 17:04:45]

Do you have any advice to solve this? Do I need to load the data and model in some way? Also, does your MACE (or MINT) algorithm work well for gradient boosting models? Thanks so much.

amirhk commented 3 years ago

Hi Bethany,

Feel free to comment out line 86 in loadModel.py and re-run; this line is there to prevent running recourse for poorly performing models (where poor performance is arbitrarily defined as below 70% accuracy).

At the moment, MACE supports LR, MLP, TREE, and FOREST models that are part of sklearn. Support for new model classes can be added when the logic-equivalent model implementation is added to modelConversion.py.
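For context, here is a minimal sketch of the kind of guard being described. This is a hypothetical illustration in plain sklearn, not the actual loadModel.py code, so the real line 86 may look different:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical illustration only (not the actual loadModel.py code): the guard
# simply refuses models below an arbitrary 70% test-accuracy bar, so commenting
# it out lets recourse run on weaker models as well.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

test_accuracy = accuracy_score(y_test, model.predict(X_test))
assert test_accuracy > 0.70, 'model accuracy too low to generate recourse for'
```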

Hope this helps!

Amir


HeyItsBethany3 commented 3 years ago

Hi Amir, thanks for this. Would you mind explaining a little more how I could adapt the repo to add a GBM model? I'm struggling to understand how modelConversion.py relates to the other files. It looks like the main logic in modelConversion.py is in the formula functions; what different logic would I need for a GBM model?

amirhk commented 3 years ago

Hi Bethany,

modelConversion.py is the primary file that builds an equivalent first-order-logic formula for the ML model. These conversion functions are called in generateSATExplanations.py, where the obtained model formula is conjoined with other formulae (counterfactual, plausibility, etc.) to form the counterfactual explanation generation problem.

Based on the explanations found here [1, 2], the difference between random forest and gradient boosted tree models lies in their training, but not so much in their prediction (i.e., for prediction, you aggregate over the estimators_ inside your ensemble, with equal or different weights depending on each member's contribution). Thus, I would imagine that not much would need to change in modelConversion.py for GB models, except perhaps adding support for weighted voting among ensemble members. Please see https://github.com/amirhk/mace/blob/master/modelConversion.py#L186-208 for reference; you should be able to re-use most of this, and also call tree2formula as a sub-routine.
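To make the additive structure concrete, here is a small self-contained sketch in plain sklearn (not mace code) showing that a GradientBoostingClassifier's raw score is a learning_rate-weighted sum over its trees plus a constant, which is why a GBM-to-formula converter could call tree2formula per estimator and weight the resulting sum:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hedged sketch (plain sklearn, not mace code): reproduce the GBM decision
# function as a weighted sum of its individual trees.
X, y = make_classification(n_samples=200, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=25, random_state=0).fit(X, y)

# For binary classification, each boosting stage holds a single regression tree.
tree_sum = sum(gbm.learning_rate * stage[0].predict(X) for stage in gbm.estimators_)

# The gap to decision_function is just the constant initial (prior) score.
offset = gbm.decision_function(X) - tree_sum
assert np.allclose(offset, offset[0])

# Thresholding the weighted sum at zero reproduces the predicted class.
assert np.array_equal((tree_sum + offset[0] > 0).astype(int), gbm.predict(X))
```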

Best, Amir

[1] https://www.datasciencecentral.com/profiles/blogs/decision-tree-vs-random-forest-vs-boosted-trees-explained
[2] https://stats.stackexchange.com/questions/173390/gradient-boosting-tree-vs-random-forest
[3] https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html


HeyItsBethany3 commented 3 years ago

Hi Amirhk,

Thanks for this, this is really helpful!

I was experiencing the issue in https://github.com/amirhk/mace/issues/8, so I've followed the instructions explained there.

I installed Python-MIP using pip install mip. Then I cloned the repo and checked out the commit:

git clone "https://github.com/ustunb/actionable-recourse.git"
cd actionable-recourse
git checkout 9387e6c

I added the directory to my path (actionable-recourse is in a directory called recourse):

echo 'export PATH="$PATH:/home/recourse"'
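(For reference, a minimal Python-side sketch of the same idea; the clone location and package name below are assumptions, and note that Python resolves imports through sys.path / PYTHONPATH rather than the shell PATH.)

```python
# Hedged sketch (paths and package name are assumptions): Python looks at
# sys.path / PYTHONPATH, not PATH, so the clone directory needs to be on
# sys.path for the package to be importable from mace.
import sys
sys.path.insert(0, '/home/recourse/actionable-recourse')

import recourse  # should resolve if the checkout above succeeded
print(recourse.__file__)
```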

Was this the correct approach?

I am now getting the following error when I run python batchTest.py -a AR or python batchTest.py -d credit -m forest -n one_norm -a MACE_eps_1e-5 -b 0 -s 1:

File "batchTest.py", line 127
raise Exception(f'{approach_string} not recognized as a valid approach_string.')

I was also getting the error 'actionableRecourse approach only works with larger.' from line 335 in batchTest.py. What does this mean?

Thanks so much for all your support.

HeyItsBethany3 commented 3 years ago

I am now running python batchTest.py -d credit -m forest -n one_norm -a MACE_eps_1e-5 -b 0 -s 1 within the virtual environment instead, but it is taking a very long time. How long does it usually take for the simulations to run?

amirhk commented 3 years ago

Glad to hear MACE is running for you now. The forest and mlp models take some time to run, much longer than tree and lr. One way to speed things up is to replace MACE_eps_1e-5 with MACE_eps_1e-3 or MACE_eps_1e-2; the optimality guarantees are more or less preserved, but this should run considerably faster. Finally, a newer CPU generation and larger RAM would naturally help as well :)
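For example, the forest run above would become: python batchTest.py -d credit -m forest -n one_norm -a MACE_eps_1e-2 -b 0 -s 1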


HeyItsBethany3 commented 3 years ago

Thanks. I don't think it's running properly yet.

I've run python batchTest.py -d adult -m lr -n zero_norm -a AR -b 0 -s 1 and am getting the error:

return generateARExplanations.genExp(
NameError: name 'generateARExplanations' is not defined
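(A hedged guess at the cause, to be checked against the local copy of batchTest.py: that NameError usually means the module was never imported, e.g. an import along these lines is commented out or failed silently when actionable-recourse was missing.)

```python
# Hedged guess, verify against your batchTest.py: the NameError typically means
# this module never got imported, e.g. the line below is commented out or its
# own dependency on the actionable-recourse ('recourse') package failed to load.
import generateARExplanations
```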

HeyItsBethany3 commented 3 years ago

It seemed to finally work when I increased the CPU and RAM, running python batchTest.py -d credit -m lr -n one_norm -a MACE_eps_1e-2 -b 0 -s 1. I think if I avoid using actionable recourse it should all work :)

However, I can't seem to find anything in the _experiments folder. Where should I look for the counterfactual values?

Thanks so much for your help :)
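(In case it helps anyone landing here: a hedged sketch for poking around whatever a run writes under _experiments; the folder layout and file types are assumptions, so adjust to what actually appears on disk.)

```python
# Hedged sketch (folder layout and file types are assumptions): list whatever a
# batchTest.py run wrote under _experiments and try to unpickle any result files.
import glob
import os
import pickle

for path in glob.glob('_experiments/**/*', recursive=True):
    if os.path.isfile(path):
        print(path)
        try:
            with open(path, 'rb') as f:
                print(pickle.load(f))
        except Exception:
            pass  # skip files that are not pickles (logs, saved models, etc.)
```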

giandos200 commented 1 year ago

Hi @amirhk, I would like to use one of the CF explanation methodologies in the repo. I'm using MACE, but I see that the CF generation is model-dependent. Does the repo include a CF generation methodology (AR, MINT) that is model-agnostic, or is there a way to write an agnostic "getModelFormula" in MACE?