SuYanqi opened this issue 2 years ago
As far as I can see, the classifier_fn gets all num_samples passed at once, so in your case 100 samples end up as a single batch. You can manually change your classifier_fn (assuming you have something like the predictor() function as shown here) so that it processes those texts with a batch size of your choice and returns the (manually) concatenated outputs.
I tried LIME on LayoutLM with 13 classes and have the same issue. BERT and LayoutLM have similar architectures in the sense that they both use transformers. I cannot get more than a single-word explanation, and a single explanation takes 10 GB of RAM.
Hi,
LIME is great and easy to use, but I ran into a problem when using LIME to explain a text classification task (186 classes) with a BERT classifier: it can only run a couple of test entries before the process is killed.
I have already set num_features=10, num_samples=100, top_labels=10, but the problem still happens.
Besides, I find it needs a lot of memory. Is there any way to reduce the memory usage?
Looking forward to your reply. Thanks