huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

[Bug] Modifying the normalizer for pretrained tokenizers doesn't work consistently #31653

Open alvations opened 1 week ago

alvations commented 1 week ago

System Info

transformers==4.41.2

Who can help?

@ArthurZucker

Reproduction

From https://github.com/huggingface/tokenizers/issues/1552#issue-2348487489

from transformers import AutoTokenizer
from tokenizers.normalizers import Sequence, Replace, Prepend

tokenizer_name = "mistralai/Mistral-7B-v0.1"
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)

assert old_tok.backend_tokenizer.normalizer is not None

# Replace the default normalizer with one that adds two extra rules.
new_normalizer = Sequence(
    [Prepend('▁'), Replace('▁', ' '), Replace("foo", "bar"), Replace('<br>', '\n')]
)

old_tok.backend_tokenizer.normalizer = new_normalizer
new_tokenizer_name = f"new_tokenizer-{tokenizer_name}"
old_tok.save_pretrained(new_tokenizer_name)

# Reload both; the saved tokenizer should carry the custom normalizer.
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)
new_tok = AutoTokenizer.from_pretrained(new_tokenizer_name)

[out]:

>>> print(' '.join(old_tok.batch_decode(old_tok("I foo you<br>hello world")['input_ids'])))
<s> I foo you < br > hello world

>>> print(' '.join(new_tok.batch_decode(new_tok("I foo you<br>hello world")['input_ids'])))
<s>  I  bar  you 
 hello  world
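
The effect of the custom normalizer can also be inspected directly through the tokenizers API; a minimal check, reusing new_tok from above (the expected string is inferred from the rules in the Sequence):

# Apply the custom normalizer to a raw string, bypassing tokenization.
print(new_tok.backend_tokenizer.normalizer.normalize_str("I foo you<br>hello world"))
# Expected given the Sequence above: " I bar you\nhello world"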

The same process above won't work for "mistralai/Mistral-7B-v0.3".

But if we reinitialize the tokenizer through its __class__ after .from_pretrained, it loads the tokenizer config correctly with the extended normalizer: https://stackoverflow.com/questions/78612251/how-do-we-add-modify-the-normalizer-in-a-pretrained-huggingface-tokenizer/78624238#78624238
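
A minimal sketch of that workaround (assumptions: the tokenizer was saved with save_pretrained as above, so a tokenizer.json exists in that directory; the use of Tokenizer.from_file plus the tokenizer_object kwarg follows the linked answer and is not a confirmed fix):

from tokenizers import Tokenizer
from transformers import AutoTokenizer

new_tokenizer_name = "new_tokenizer-mistralai/Mistral-7B-v0.3"  # assumed save path

# A plain from_pretrained can rebuild the backend tokenizer and silently
# drop the custom normalizer for this checkpoint.
new_tok = AutoTokenizer.from_pretrained(new_tokenizer_name)

# Load the serialized backend directly, then reinitialize through the
# concrete class so the saved normalizer is kept.
backend = Tokenizer.from_file(f"{new_tokenizer_name}/tokenizer.json")
new_tok = new_tok.__class__(tokenizer_object=backend)

assert new_tok.backend_tokenizer.normalizer is not None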

Expected behavior

The same .from_pretrained should work for other models' tokenizers after changes to the normalizer.

ArthurZucker commented 1 week ago

Hey! This is not because you can't change it, but because the v0.3 tokenizer does not have a normalizer at all.

[screenshot: the Mistral-7B-v0.3 tokenizer config, with no normalizer]

This is the "legacy=False" version of the tokenizer. This should be fixed soon, by the way: the Mistral v0.1 tokenizer should also end up without a normalizer.
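
You can check this directly; a quick sketch that just prints the backend normalizer of both checkpoints:

from transformers import AutoTokenizer

# Per the comment above: v0.1 currently reports a normalizer,
# while v0.3 (legacy=False) reports None.
for name in ("mistralai/Mistral-7B-v0.1", "mistralai/Mistral-7B-v0.3"):
    tok = AutoTokenizer.from_pretrained(name)
    print(name, "->", tok.backend_tokenizer.normalizer)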