Closed arditobryan closed 4 years ago
Hi,
I am following the instructions on the Hugging Face website to use an encoder-decoder model:
from transformers import EncoderDecoderModel, BertTokenizer
import torch
However, I have no idea how to decode the generated output. Can anybody please help? Thank you.
Maybe I found it out. Is it:
for i, sample_output in enumerate(generated):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
?
You can also make use of tokenizer.batch_decode(...)