j-min / VL-T5

PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
https://arxiv.org/abs/2102.02779
MIT License

a bug of VLT5TokenizerFast #21

Open Neo-Zhangjiajie opened 2 years ago

Neo-Zhangjiajie commented 2 years ago

When I use VLT5TokenizerFast to encode a sentence, a token id 3 ('▁') is inserted before the id of the sentinel token <extra_id_0>. For example,

from tokenization import VLT5Tokenizer, VLT5TokenizerFast
from transformers import T5Tokenizer, T5TokenizerFast

tokenizer = VLT5TokenizerFast.from_pretrained(
            't5-base',
            max_length=20,
            do_lower_case=False,
            )

text = "I <extra_id_0> you."
input_ids = tokenizer.encode(text)
decoded_text = tokenizer.decode(input_ids)
print(text)
print(input_ids)
print(decoded_text)
print(tokenizer.convert_ids_to_tokens([3]))

(base) zhangjiajie@node2:~/VL-T5-Incontext/VL-T5-Incontext/src$ python test.py
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. 
The tokenizer class you load from this checkpoint is 'T5Tokenizer'. 
The class this function is called from is 'VLT5TokenizerFast'.
I <extra_id_0> you.
[27, 3, 32099, 25, 5, 1]
I <extra_id_0> you.</s>
['▁']


If I just use T5TokenizerFast instead, the output is correct:

(base) zhangjiajie@node2:~/VL-T5-Incontext/VL-T5-Incontext/src$ python test.py
I <extra_id_0> you.
[27, 32099, 25, 5, 1]
I<extra_id_0> you.</s>
['▁']

Is there any solution? Thanks!

j-min commented 2 years ago

Could you please check the version of your transformers package? With transformers==4.2.1 (as pinned in requirements.txt), both tokenizers yield the same results:

I <extra_id_0> you.
[27, 32099, 25, 5, 1]
I<extra_id_0> you.</s>
['▁']
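Until the version mismatch is resolved, a minimal post-processing workaround is to drop the standalone '▁' id (3) from the encoded ids. This is only a sketch, not part of the repo, and it assumes id 3 never appears as a legitimate token in your inputs, which may not hold for arbitrary text:

```python
# Workaround sketch: strip the spurious standalone '▁' token (id 3) that
# VLT5TokenizerFast inserts before sentinel tokens on newer transformers
# versions. Caveat: assumes id 3 is never a legitimate token in the input.
SPIECE_UNDERLINE_ID = 3  # id of the standalone '▁' token, per the output above

def strip_spiece_underline(input_ids):
    """Remove standalone '▁' ids from an encoded id sequence."""
    return [i for i in input_ids if i != SPIECE_UNDERLINE_ID]

buggy_ids = [27, 3, 32099, 25, 5, 1]      # VLT5TokenizerFast output above
print(strip_spiece_underline(buggy_ids))  # [27, 32099, 25, 5, 1]
```

Pinning transformers to 4.2.1 as in requirements.txt is the cleaner fix; the filter above is only a stopgap if you must stay on a newer version.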