interpretml / interpret

Fit interpretable models. Explain blackbox machine learning.
https://interpret.ml/docs
MIT License

Deep models #262

Open T-nfd opened 3 years ago

T-nfd commented 3 years ago

Hi,

I am a new researcher in this field. I want to use interpretml with a deep model. Besides using it for visualization, I need to understand the source code logic; that is, I need to find out what this explainer reads from a deep model when it interprets the model.

Could you please point me to the source code from which I can understand this explainer's behaviour and what it examines in a deep model?

interpret-ml commented 3 years ago

Hi @Nima-pw,

Glad to hear about your interest! Could you elaborate a bit on your question for us? You can find our source code for all explainers here: https://github.com/interpretml/interpret/tree/develop/python/interpret-core/interpret in the "glassbox" and "blackbox" folders.

You may also get some insights from the Algorithms section of our documentation: https://interpret.ml/docs/ebm.html
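As a rough intuition for the blackbox explainers linked above: they treat the deep model as an opaque prediction function and learn about it only by querying that function on perturbed inputs, never by reading its layers or weights. The sketch below is a minimal, hypothetical illustration of that perturbation idea (not interpret's actual implementation, whose algorithms like LIME and SHAP are more sophisticated); `predict` is a stand-in for any deep model's prediction function.

```python
import numpy as np

def predict(X):
    # Stand-in "deep model": any callable mapping inputs to outputs.
    # A blackbox explainer only ever calls this function -- it never
    # inspects the model's layers or parameters.
    # (Hypothetical model: feature 0 matters most, feature 2 not at all.)
    return 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]

def perturbation_importance(predict, x, n_samples=500, scale=0.1, seed=0):
    """Estimate per-feature sensitivity at point x by perturbing one
    feature at a time and measuring how much the prediction moves."""
    rng = np.random.default_rng(seed)
    base = predict(x[None, :])[0]          # prediction at the original point
    importances = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        X = np.tile(x, (n_samples, 1))     # copies of x
        X[:, j] += rng.normal(0.0, scale, size=n_samples)  # jitter feature j
        importances[j] = np.mean(np.abs(predict(X) - base))
    return importances

x = np.array([1.0, 2.0, 3.0])
print(perturbation_importance(predict, x))
# Feature 0 gets the largest score; feature 2 scores ~0, since the
# model's output never changes when it is perturbed.
```

The key takeaway for your question: with blackbox explainers there is nothing "examined" inside the network itself; all information comes from input/output behaviour of the prediction function.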

-InterpretML Team

T-nfd commented 3 years ago

Hi, thanks for your response. I want to know how the layers, logs, parameters, etc. of a deep model are examined for interpretability.

T-nfd commented 3 years ago

I hope my question is clear and that you can help me.