huggingface / tokenizers

💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
https://huggingface.co/docs/tokenizers
Apache License 2.0

Assign `<unusedXX>` tokens with `special_tokens` without growing vocab size #1473

Open jacobwjs opened 3 months ago

jacobwjs commented 3 months ago

I'm trying to modify the `google/gemma-7b` tokenizer for instruction-tuning purposes. My goal is to replace some of the "unused" tokens that were deliberately added to the tokenizer with my own "custom" tokens. I want these custom tokens to be treated as "special" (i.e. not normalized, stripped, etc.), but so far this seems impossible.

What I would like to do is some version of the following:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

custom_tokens = ['<|im_start|>', '<|im_end|>']
unused_tokens = ['<unused1>', '<unused2>']

# 'tokens_to_replace' is the API I wish existed: map each custom token onto
# the id of the corresponding unused token instead of appending new ids.
tokenizer.add_special_tokens({'additional_special_tokens': custom_tokens,
                              'tokens_to_replace': unused_tokens})
```

Given that many open-source models/tokenizers specifically reserve a set of unused tokens for this purpose, I would like to make use of them without growing the vocabulary, and therefore without having to resize the model's embedding matrix.
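
For reference, the standard route I'm trying to avoid looks roughly like the sketch below: `add_special_tokens` appends brand-new ids, which then forces a call to `resize_token_embeddings` on the model (`AutoModelForCausalLM` is used here just for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

# New token strings get brand-new ids appended to the end of the vocab...
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]}
)

# ...so the embedding matrix has to grow to match the new vocab size.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```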

I've tried manipulating the vocab manually and reassigning the forward and reverse mappings (encoder/decoder), but nothing seems to work.

How can I reuse the unused tokens, make sure they are treated as "special", and avoid growing the tokenizer vocabulary and the model's embedding matrix?
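
For concreteness, the kind of manual manipulation I've been attempting looks roughly like the sketch below. It edits the serialized `tokenizer.json` so the unused entries keep their ids but carry my custom strings; it assumes a local snapshot of the model, a fast tokenizer whose vocab is serialized as a `{token: id}` dict, and special entries stored under `"added_tokens"`, all of which should be checked against the actual file.

```python
import json

from transformers import AutoTokenizer

# Which placeholder strings to rename (ids are left untouched).
replacements = {"<unused1>": "<|im_start|>", "<unused2>": "<|im_end|>"}
path = "./gemma-7b/tokenizer.json"  # local snapshot of google/gemma-7b

with open(path, encoding="utf-8") as f:
    data = json.load(f)

# Rename the placeholder strings in the vocab so their ids are preserved.
vocab = data["model"]["vocab"]
for old, new in replacements.items():
    if old in vocab:
        vocab[new] = vocab.pop(old)

# Rename the matching added-token entries and flag them as special so they
# are not normalized or stripped.
for entry in data.get("added_tokens", []):
    if entry.get("content") in replacements:
        entry["content"] = replacements[entry["content"]]
        entry["special"] = True

with open(path, "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False)

tokenizer = AutoTokenizer.from_pretrained("./gemma-7b")
```

If the unused tokens also appear in `special_tokens_map.json` or `tokenizer_config.json`, they would presumably need the same rename.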

ArthurZucker commented 3 months ago

That is something we should do indeed

jacobwjs commented 3 months ago

Beautiful. That would mostly resolve another issue as well https://github.com/huggingface/trl/issues/1412#issue-2177631978

github-actions[bot] commented 2 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

ArthurZucker commented 3 weeks ago

I'll take this one on in a bit!