Open nasheennur opened 5 years ago
Hi @marcotcr , I apologize, I may not have been clear about the issue. Does LIME handle explaining categorical features for an RNN? I don't think so: it fails to index the flattened data_row for categorical features. I am pasting the screenshot here. I just added a random column (test_cata) of categorical features to your CO2 example, encoded as 0, 1, 2, 3 for 't', 'tv', 'v', 'vt', with a timestep of 12, and I get the following error. This problem happens when I pass model.predict to explain_instance. But when I pass predict_fn = lambda x: model.predict_proba(encoder1.fit_transform(x)).astype(float), the error is different: it occurs at the line yss = predict_fn(inverse). The inverse array is 2D, while the _make_predict_proba function returns a 3D array.
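One way around the 2D/3D mismatch described above is to wrap the model so the flattened 2D samples LIME generates are reshaped back to (n_samples, n_steps, n_features) before calling predict. This is only a sketch with assumed shapes and a stand-in model function (dummy_model_predict and make_predict_fn are illustrative names, not part of the LIME API):

```python
import numpy as np

N_STEPS, N_FEATURES = 12, 2  # assumed shapes for illustration

def dummy_model_predict(x3d):
    # stand-in for a Keras model.predict that expects
    # input of shape (n_samples, n_steps, n_features)
    assert x3d.ndim == 3
    return x3d.mean(axis=(1, 2)).reshape(-1, 1)

def make_predict_fn(model_predict, n_steps, n_features):
    """Wrap a 3D-input model so it can be called on the
    flattened 2D samples LIME passes to predict_fn."""
    def predict_fn(x2d):
        x3d = x2d.reshape(-1, n_steps, n_features)
        return model_predict(x3d)
    return predict_fn

predict_fn = make_predict_fn(dummy_model_predict, N_STEPS, N_FEATURES)
flat = np.ones((5, N_STEPS * N_FEATURES))  # flattened samples, as LIME produces
out = predict_fn(flat)
print(out.shape)  # (5, 1)
```

This only fixes the shape mismatch on the prediction side; the categorical-index problem below still needs its own mapping.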
Ah, sorry, I had misunderstood. You are right, this is a bug: categorical_features expects indexes, which get messed up when the input is unrolled. The quick fix is to map the categorical feature indexes to the appropriate unrolled positions (each categorical feature has to be repeated n_steps times).
I will wait for someone to do a pull request though, I don't have the time to do this right now : )
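The index mapping suggested above can be sketched as follows. This assumes a row-major flattening where column = step * n_features + feature; the helper name unroll_categorical_indexes is illustrative, not a LIME function, and the actual column order used internally by LIME may differ:

```python
def unroll_categorical_indexes(categorical_features, n_steps, n_features):
    """Map per-timestep categorical feature indexes to their column
    positions in the flattened (n_samples, n_steps * n_features) layout,
    assuming column = step * n_features + feature."""
    return sorted(
        step * n_features + j
        for j in categorical_features
        for step in range(n_steps)
    )

# e.g. feature 1 is categorical, 3 timesteps, 2 features per step:
print(unroll_categorical_indexes([1], n_steps=3, n_features=2))
# [1, 3, 5]
```

The resulting list could then be passed as the explainer's categorical_features argument in place of the per-timestep indexes.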
Is there an update for this function/error? I am facing the same problem.
I am currently working on a time-series dataset to predict stock prices, using a Keras RNN LSTM model. Since LSTM input has to be formatted as a NumPy 3D array, LIME's tabular explainer does not work for explaining categorical and numerical data together. I would like to use LIME to help explain the results via visualization.
I would really appreciate any response if someone already worked it out.
Thanks, Nasheen