cdpierse / transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Apache License 2.0

How to use transformers-interpret for sequence labelling, for example LayoutLMv3 #104

Open deepanshudashora opened 2 years ago

deepanshudashora commented 2 years ago

I was testing it on LayoutLMv3 and I am facing an error:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-47-f0c042620a72> in <module>
----> 1 word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=['O'])

3 frames
/usr/lib/python3.7/re.py in sub(pattern, repl, string, count, flags)
    192     a callable, it's passed the Match object and must return
    193     a replacement string to be used."""
--> 194     return _compile(pattern, flags).sub(repl, string, count)
    195 
    196 def subn(pattern, repl, string, count=0, flags=0):

TypeError: expected string or bytes-like object
```

The code I am using is:

```python
from transformers_interpret import TokenClassificationExplainer

cls_explainer = ner_explainer = TokenClassificationExplainer(
    model,
    processor.tokenizer,
)
word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=['O'])
```
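
For what it's worth, the traceback appears to point at the root cause: the call chain from the explainer reaches `re.sub` with the raw input, and `TokenClassificationExplainer` seems to expect that input to be a text string rather than a `PIL.Image`. A minimal sketch of that failure mode (the `FakeImage` class below is a hypothetical stand-in, not part of any library):

```python
import re

class FakeImage:
    """Hypothetical stand-in for a PIL Image object, mimicking the failing input."""
    pass

try:
    # Passing a non-string where re.sub expects text fails exactly like
    # the traceback above.
    re.sub(r"\s+", " ", FakeImage())
except TypeError as exc:
    # Message starts with "expected string or bytes-like object"
    print(exc)
```

So the `TypeError` comes from the input type, not from the model itself; the open question is whether the explainer can accept the processed text/boxes that a multimodal model like LayoutLMv3 needs.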
SuryaThiru commented 1 year ago

Hi, I have a similar use case with LayoutLMv3ForTokenClassification and LayoutLMv3Processor. Would it be possible to interpret these models for token classification on datasets like SROIE?