uchicago-computation-workshop / adam_bonica

Repository for Adam Bonica's presentation at the CSS Workshop (1/24/2019)

Local Interpretable Model-Agnostic Explanations (LIME) for feature importance of individual predictions #24

Open AlexanderTyan opened 5 years ago

AlexanderTyan commented 5 years ago

I wonder if you have considered the lime package, which generated quite a bit of buzz in the machine learning interpretability sphere. It seems you are using feature importance after training the Random Forest, which gives some insight, as you mention. However, I would imagine that LIME could help explain the reasons behind individual predicted cases when you use your RF for forecasting, and may offer further insight for the mapping process.
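Roughly, something like the sketch below is what I have in mind, using the Python lime package. The random forest, training data, feature names, and class names here are just placeholders to show the workflow, not objects from the presentation's actual code:

```python
# Minimal sketch: explaining one Random Forest prediction with LIME.
# All data and names below are stand-ins, not the workshop's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                      # placeholder training data
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
feature_names = ["f0", "f1", "f2", "f3"]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["neg", "pos"],
    mode="classification",
)

# Explain a single forecasted case: LIME fits a local surrogate model around
# this row and reports per-feature contributions to the RF's predicted probability.
exp = explainer.explain_instance(X_train[0], rf.predict_proba, num_features=4)
print(exp.as_list())
```

The output is a list of (feature condition, weight) pairs for that one case, which is the kind of per-prediction reasoning I mean.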

Relatedly, what was the reasoning behind using Partial Dependence (as opposed to other interpretability methods)?
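To make the contrast I'm drawing concrete: partial dependence reports the global, average effect of a feature (marginalizing over the others), whereas LIME explains a single prediction. A minimal sketch of the former, again with placeholder data and model rather than the workshop's:

```python
# Minimal sketch: partial dependence as a global, average feature effect.
# All objects here are stand-ins, not the workshop's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Average predicted response across a grid of values for feature 0.
pd_result = partial_dependence(rf, X, features=[0], kind="average")
print(pd_result["average"])
```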

w4rner commented 5 years ago

@AlexanderTyan did you ever manage to implement this?