huggingface / tokenizers

💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
https://huggingface.co/docs/tokenizers
Apache License 2.0

Different output of AutoTokenizer from that of T5Tokenizer #1463

Closed: sm745052 closed this issue 7 months ago

sm745052 commented 7 months ago

Hi, I found out that after adding a new token, say `<tk>`, the two tokenizers behave differently.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-base")
tokenizer.add_tokens("<tk>")
text = 'hello<tk>'
encoded = tokenizer.encode(text, return_tensors='pt')
decoded_text = tokenizer.decode(encoded[0])
print("Decoded text:", decoded_text)

gives

Decoded text: hello<tk></s>

whereas

from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.add_tokens("<tk>")
text = 'hello<tk>'
encoded = tokenizer.encode(text, return_tensors='pt')
decoded_text = tokenizer.decode(encoded[0])
print("Decoded text:", decoded_text)

gives

Decoded text: hello <tk> </s>
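
For context, the two snippets exercise different classes: AutoTokenizer loads the fast, Rust-backed tokenizer (here T5TokenizerFast) by default when one is available, while T5Tokenizer is the slow SentencePiece-based implementation. A quick sketch to confirm which class each call returns:

from transformers import AutoTokenizer, T5Tokenizer

# AutoTokenizer picks the fast (Rust) implementation by default when available
auto_tok = AutoTokenizer.from_pretrained("t5-base")
slow_tok = T5Tokenizer.from_pretrained("t5-base")
print(type(auto_tok).__name__, auto_tok.is_fast)  # T5TokenizerFast True
print(type(slow_tok).__name__, slow_tok.is_fast)  # T5Tokenizer False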

ArthurZucker commented 7 months ago

Hey, you should see the following warning:

You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy = False) should be used. also:

- decoded_text = tokenizer.decode(encoded[0])
+ decoded_text = tokenizer.decode(encoded[0], spaces_between_special_tokens=False)
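
Putting both suggestions together, a minimal sketch (the expected output assumes t5-base and the two fixes above):

from transformers import T5Tokenizer

# legacy=False opts into the corrected SentencePiece handling from transformers#24565
tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=False)
tokenizer.add_tokens("<tk>")

encoded = tokenizer.encode("hello<tk>", return_tensors="pt")
# Without spaces_between_special_tokens=False, the slow tokenizer inserts
# spaces around added and special tokens when decoding
decoded_text = tokenizer.decode(encoded[0], spaces_between_special_tokens=False)
print("Decoded text:", decoded_text)  # expected: Decoded text: hello<tk></s>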

Closing, as this is related to transformers, not tokenizers.