thomasp85 / lime

Local Interpretable Model-Agnostic Explanations (R port of original Python package)
https://lime.data-imaginist.com/

Suggestion regarding plotting cases #113

Open hrampadarath opened 6 years ago

hrampadarath commented 6 years ago

Hi,

first, thanks for creating this package/algorithm. It's been helping me understand these "black box" models. I initially ran into problems using the `plot_features` function: the bars were not visible, and the features on the y-axis were squished so much that they overlapped. After copying and editing the `plot.R` source I noticed the issue was that `plot_features` was trying to plot too many cases at once. Adding the line `explanation <- explanation[which(explanation$case %in% cases),]` just before `if (explanation$model_type[1] == 'classification'){...}`, and adding a `cases` option to the function call (e.g. `cases = c("1","2")`), allowed me not only to view the plot in all its informative glory but also gave me the flexibility to investigate specific cases, which I personally find more useful. Thought I'd share this in case anyone else runs into a similar problem.
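A minimal sketch of the same idea applied outside the package, assuming a hypothetical `explainer` built with `lime()` and some `test_data`: filter the explanation data frame to the cases of interest before handing it to `plot_features()`.

```r
library(lime)

# Hypothetical objects: `explainer` was created with lime() and
# `test_data` holds the observations being explained.
explanation <- explain(test_data, explainer, n_labels = 1, n_features = 5)

# Keep only the cases of interest so the bars and y-axis labels
# are not squeezed by plotting every case at once.
cases <- c("1", "2")
explanation_subset <- explanation[explanation$case %in% cases, ]

plot_features(explanation_subset)
```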

Cheers, Hayden.

thomasp85 commented 6 years ago

In general the idea is that you should filter your explanations yourself before plotting them, but I may be inclined to agree that having that filtering built into the plot function would be nice.
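For reference, the call the issue proposes would look something like the sketch below; the `cases` argument is hypothetical here, not part of the released `plot_features()` at the time of this thread.

```r
# Hypothetical/proposed signature: filter to specific cases inside the plot call
plot_features(explanation, cases = c("1", "2"))
```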