Open yusufcakmakk opened 2 years ago
Hey @yusufcakmakk,
I stumbled across the same problem. I simply changed some of the parameters to pass max_length
to the tokenizer's encode
function. See my fork here.
Hope this helps!
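For context, the general idea behind this fix is to cap the encoded sequence at the model's maximum length before it reaches the model. A minimal sketch of that truncation step (a hypothetical helper for illustration, not the actual code in the fork):

```python
def truncate_ids(input_ids, max_length=512):
    """Truncate a list of token ids to the model's maximum sequence
    length, keeping the final special token (e.g. [SEP] / </s>) in place."""
    if len(input_ids) <= max_length:
        return input_ids
    # drop ids from the middle/end of the text, but re-append the last id,
    # which is the end-of-sequence special token
    return input_ids[:max_length - 1] + [input_ids[-1]]
```

In practice this is what passing truncation=True and max_length to the tokenizer's encode function does for you.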
Hey @e-tornike, I just looked at your fork, this is great! Would you be interested in adding this as a contribution?
Hi @cdpierse, thanks for having a look at this! I've simplified the truncation further and made a pull request.
Hi,
It works fine when I use SequenceClassificationExplainer with short texts, but for long texts it throws an error like:
RuntimeError: The expanded size of the tensor (583) must match the existing size (514) at non-singleton dimension 1. Target sizes: [1, 583]. Tensor sizes: [1, 514]
I think the problem would be solved if I could modify or pass some parameters like
padding="max_length", truncation=True, max_length=max_length
to the explainer. Do you have any suggestions for this problem? How can I solve it?
Example usage:
Exception:
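To illustrate why truncation resolves the size mismatch in the exception above, here is a self-contained sketch. The tokenizer is a whitespace stand-in for illustration only, not a real transformers tokenizer; the 514 limit matches RoBERTa-style models (512 tokens plus 2 special tokens):

```python
MODEL_MAX_LEN = 514  # assumed position-embedding limit of the model

def encode(text, max_length=None, truncation=False):
    # stand-in for tokenizer.encode: one "token" per whitespace-split word
    ids = list(range(len(text.split())))
    if truncation and max_length is not None:
        ids = ids[:max_length]
    return ids

long_text = " ".join(["word"] * 583)

# without truncation the sequence (583) exceeds the model limit (514),
# which is what triggers the "expanded size" RuntimeError inside the model
untruncated = encode(long_text)

# with truncation=True and max_length set, the sequence fits the model
truncated = encode(long_text, max_length=MODEL_MAX_LEN, truncation=True)
```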