huggingface / transformers

πŸ€— Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Add missing tokenizer test files [:building_construction: in progress] #16627

Closed · SaulLu closed this issue 1 month ago

SaulLu commented 2 years ago

πŸš€ Add missing tokenizer test files

Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.

Tokenizers concerned

not yet claimed

none

claimed

with an ongoing PR

none

with an accepted PR

How to contribute?

  1. Claim a tokenizer

    a. Choose a tokenizer from the list of "not yet claimed" tokenizers

    b. Check that no one has already indicated in the comments on this issue that they are working on this tokenizer

    c. Put a message in the issue that you are handling this tokenizer

  2. Create a local development setup (if you have not already done it)

    I refer you to the "start-contributing-pull-requests" section of the Contributing guidelines, where everything is explained. Don't be afraid of step 5: for this contribution you will only need to run locally the tests you add.

  3. Follow the instructions in the readme inside the templates/adding_a_missing_tokenization_test folder to generate the template for the new test file with cookiecutter. At the end of the template generation, don't forget to move the new test file into the sub-folder of the tests folder named after the model for which you are adding the test file. Some details about the questionnaire, assuming that the lowercase name of the model is brand_new_bert:

    • "has_slow_class": Set true there is a tokenization_brand_new_bert.py file in the folder src/transformers/models/brand_new_bert
    • "has_fast_class": Set true there is a tokenization_brand_new_bert_fast.py file the folder src/transformers/models/brand_new_bert.
    • "slow_tokenizer_use_sentencepiece": Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece. If this tokenizer don't have a `tokenization_brand_new_bert.py file set False.
  4. Complete the setUp method in the generated test file; you can take inspiration from how it is done for the other tokenizers (a minimal sketch of a typical setUp is given right after this list).

  5. Try to run all the added tests. It is possible that some tests will not pass, so it is important to understand why: sometimes a common test is not suited to a tokenizer, and sometimes a tokenizer can have a bug. You can also look at what is done in similar tokenizer tests; if there are big problems or you don't know what to do, we can discuss this in the PR (step 7).

  6. (Bonus) Try to get a good understanding of the tokenizer so that you can add custom tests for it

  7. Open a PR with the new test file added, remember to fill in the PR title and message body (referencing this issue) and request a review from @LysandreJik and @SaulLu.
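
To make step 4 more concrete, here is a minimal sketch (my own addition, not an official template) of what a setUp method for a WordPiece, BERT-like tokenizer typically looks like, modeled on the existing BERT tokenization test. The class name BrandNewBertTokenizationTest, the toy vocabulary and the import paths are illustrative and should be adapted to the tokenizer you claimed.

import os
import unittest

from transformers import BertTokenizer
from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES

# Inside the tests folder this mixin is imported with a relative import,
# e.g. `from ..test_tokenization_common import TokenizerTesterMixin`;
# the exact path depends on where the generated test file lives.
from tests.test_tokenization_common import TokenizerTesterMixin


class BrandNewBertTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    # Illustrative only: point this at the tokenizer you are actually testing.
    tokenizer_class = BertTokenizer
    test_rust_tokenizer = False

    def setUp(self):
        super().setUp()
        # Write a tiny toy vocabulary into the temporary directory created by the
        # mixin; the common tests instantiate the tokenizer from this directory.
        vocab_tokens = [
            "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]",
            "want", "##want", "##ed", "wa", "un", "runn", "##ing", "low", "lowest",
        ]
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("".join(token + "\n" for token in vocab_tokens))

For step 5 you can then run just this file, for example with python -m pytest tests/brand_new_bert/test_tokenization_brand_new_bert.py (adjust the path to the folder you moved the file into).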

Tips

Do not hesitate to read the questions / answers in this issue :newspaper:

tgadeliya commented 2 years ago

Hi, I would like to add tests for Longformer tokenizer

anmolsjoshi commented 2 years ago

@SaulLu I would like to add tests for Flaubert

Rajathbharadwaj commented 2 years ago

Hey, I would like to contribute for Electra. Pointers please!

SaulLu commented 2 years ago

Thank you all for offering your help!

@Rajathbharadwaj, sure! What do you need help with? Do you need more details on any of the steps listed in the main post?

farahdian commented 2 years ago

Hi, first-time contributor here. Could I add tests for Splinter?

farahdian commented 2 years ago

Is anyone else encountering this error with the cookiecutter command? My dev environment setup seemed to have gone fine... Also, I had run the command inside the tests/splinter directory.

[Screenshot of the cookiecutter error, 2022-04-11]

SaulLu commented 2 years ago

@faiazrahman , thank you so much for working on this! Regarding your issue, if you're in the tests/splinter folder, can you try to run cookiecutter ../../templates/adding_a_missing_tokenization_test/ ?

You should have a newly created folder cookiecutter-template-BrandNewBERT inside tests/splinter. :slightly_smiling_face:

If that's the case, you'll need after to do something like:

mv cookiecutter-template-BrandNewBERT/test_tokenization_brand_new_bert.py .
rm -r cookiecutter-template-BrandNewBERT/

Keep me posted :smile:

farahdian commented 2 years ago

Thanks so much @SaulLu, turns out it was due to my environment not recognizing the installed cookiecutter, so I sorted it out there. 👍

SaulLu commented 2 years ago

Hi @anmolsjoshi, @tgadeliya, @Rajathbharadwaj and @farahdian,

Just a quick message to see how things are going for you and if you have any problems. If you do, please share them! :hugs:

farahdian commented 2 years ago

Thanks @SaulLu! I've been exploring the tokenization test files in the repo, just trying to figure out which ones would be a good basis for writing a tokenization test for Splinter... if you have any guidance on this it would be super helpful!

Rajathbharadwaj commented 2 years ago

Hey @SaulLu, my apologies, I've been a bit busy. I'll get started ASAP; however, I still didn't understand where exactly I should run the cookiecutter.

Help on this would be helpful πŸ˜„

SaulLu commented 2 years ago

Hi @farahdian ,

Thank you very much for the update! To know where you stand, have you done step 3)? Is it for step 4) that you are looking for a similar tokenizer? :slightly_smiling_face:

SaulLu commented 2 years ago

Hi @Rajathbharadwaj ,

Thank you for the update too!

I still didn't understand where exactly I should run the cookie cutter

You can run the cookiecutter command anywhere, as long as the command is followed by the path to the adding_a_missing_tokenization_test folder in the transformers repo that you have cloned locally.

When you run the command, it will create a new folder at your current location. In this folder you will find a base for the Python test file, which you need to move inside the tests/electra folder of your local transformers clone. Once this file is moved, you can delete the folder that was created by the cookiecutter command.

Below is an example of the sequence of bash commands I would personally use:

(base) username@hostname:~$ cd ~/repos
(base) username@hostname:~/repos$ git clone git@github.com:huggingface/transformers.git
[Install my development setup]
(transformers-dev) username@hostname:~/repos$ cookiecutter transformers/templates/adding_a_missing_tokenization_test/
[Answer the questionnaire]
(transformers-dev) username@hostname:~/repos$ mv cookiecutter-template-Electra/test_tokenization_electra.py transformers/tests/electra
(transformers-dev) username@hostname:~/repos$ rm -r cookiecutter-template-Electra/

Hope that'll help you :smile:

farahdian commented 2 years ago

Appreciate your patience @SaulLu ! Yup I've done step 3 and generated a test tokenization file with cookiecutter. Now onto working on the setUp method πŸ˜„

SaulLu commented 2 years ago

@farahdian, this is indeed a very good question: finding the closest tokenizer to draw inspiration from and identifying the important differences with that tokenizer is the most interesting part.

For that there are several ways to start:

  1. Identify the high level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the PRETRAINED_VOCAB_FILES_MAP global variable in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a vocab file, with both a vocab and a merges file, with a sentencepiece binary file, or with only a tokenizer.json file).
  2. Read the high level explanation of the model in the transformers documentation (e.g. for Splinter)
  3. Read the paper corresponding to the model
  4. Look at the implementation in transformers lib
  5. Look at the original implementation of the model (often mentioned in the paper)
  6. Look at the discussions on the PR in which the model was added

For the model you're in charge of, @farahdian:

Given these mentions, it seems that Splinter's tokenizer is very similar to Bert's one. It would be interesting to confirm this impression and to understand all the differences between SplinterTokenizer and BertTokenizer so that it is well reflected in the test :slightly_smiling_face:
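
If it helps, here is a small sketch (my own addition, not part of the original instructions) of how you could list a reference checkpoint's files from Python instead of browsing the hub; tals/splinter-base is assumed here to be a Splinter reference checkpoint:

from huggingface_hub import list_repo_files

# List the files stored in the checkpoint repository and keep the tokenizer-related ones.
files = list_repo_files("tals/splinter-base")
tokenizer_files = [
    f for f in files
    if any(key in f for key in ("vocab", "merges", "spiece", "tokenizer"))
]
print(tokenizer_files)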

Rajathbharadwaj commented 2 years ago


Thank you so much @SaulLu, I understand now. However, I am unsure about the slow_tokenizer_use_sentencepiece question. I set it to True as there is a tokenization_electra.py file, but I didn't understand

"Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"

So did I select correctly? Or should I set it to False? Apologies for asking so many questions πŸ˜„

However, I've now started adding tests for Electra and will keep you posted if I run into something I don't understand.

Thanks for helping once again!

tgadeliya commented 2 years ago

Hi @SaulLu, I think my case is the easiest one, because the Longformer model actually uses the same tokenizer as RoBERTa, with no differences. So I adapted the tests (small refactor and changes) from the RoBERTa tokenizer and prepared a branch with the tests. Nevertheless, I really want to dive deeper and study the code of TokenizerTesterMixin, and if after that I find some untested behaviour, I will add new tests. But I have one doubt that you can resolve: are you expecting the Longformer tests to have a different toy tokenizer example than the RoBERTa tests? Or should I write my own tests from scratch?

SaulLu commented 2 years ago

@Rajathbharadwaj , I'm happy to help! Especially as your questions will surely be useful for other people

however, I am skeptical about slow_tokenizer_use_sentencepiece question, but I set it to True as it had the tokenization_electra.py file but I didn't understand "Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece" So did I select correctly? Or should I set it to False? Apologies for asking so many questions :smile:

Some XxxTokenizer classes (without Fast at the end, implemented in a tokenization_xxx.py file) use a backend based on the sentencepiece library. For example, T5Tokenizer uses a backend based on sentencepiece: you can see this import at the beginning of the tokenization_t5.py file: https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L24 and you can see that the backend is instantiated here: https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L151-L152

On the contrary, BertTokenizer for example does not use a sentencepiece backend.
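
As a quick illustration (a sketch of my own, not from the original instructions): sentencepiece-backed slow tokenizers in transformers expose the backend as an sp_model attribute, so one simple check is:

from transformers import BertTokenizer, T5Tokenizer

t5_tokenizer = T5Tokenizer.from_pretrained("t5-small")
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# T5Tokenizer wraps a sentencepiece.SentencePieceProcessor, BertTokenizer does not.
print(hasattr(t5_tokenizer, "sp_model"))    # True
print(hasattr(bert_tokenizer, "sp_model"))  # False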

I hope this helped you!

SaulLu commented 2 years ago

Hi @tgadeliya ,

Thanks for the update!

But I think I have one doubt, that you can resolve. Are you anticipating from Longformer tests to have different toy tokenizer example than in RoBERTa tests? Or maybe I should write my own tests from scratch?

In your case, I wouldn't be surprised if Longformer uses the same tokenizer as RoBERTa. In this case, it seems legitimate to use the same toy tokenizer. Maybe the only check you can do to confirm this hypothesis is to compare the vocabularies of the "main" checkpoints of both models:

!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/merges.txt
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/vocab.json
!wget https://huggingface.co/roberta-base/raw/main/merges.txt
!wget https://huggingface.co/roberta-base/raw/main/vocab.json

!diff merges.txt merges.txt.1
!diff vocab.json vocab.json.1

Turns out the result confirms it!
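
For completeness, here is a rough Python equivalent of the check above (my own sketch); for these slow BPE tokenizers the vocabulary is exposed by get_vocab() and the merge rules by the bpe_ranks attribute:

from transformers import LongformerTokenizer, RobertaTokenizer

longformer_tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
roberta_tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Both comparisons should print True if the two tokenizers share vocab and merges.
print(longformer_tokenizer.get_vocab() == roberta_tokenizer.get_vocab())
print(longformer_tokenizer.bpe_ranks == roberta_tokenizer.bpe_ranks)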

leondz commented 2 years ago

Hi, I'm happy to take MobileBert

elusenji commented 2 years ago

I'd like to work on ConvBert.

elusenji commented 2 years ago

Identify the high level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the PRETRAINED_VOCAB_FILES_MAP global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a vocab file, with both a vocab and a merges files, with a sentencepiece binary file or with only a tokenizer.json file).

@SaulLu I'm having trouble identifying ConvBert's 'reference' checkpoint files on the hub. Would you kindly provide more guidance on this?

SaulLu commented 2 years ago

Hi @elusenji ,

In the src/transformers/models/convbert/tokenization_convbert.py file you can find the global variable PRETRAINED_VOCAB_FILES_MAP: https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/models/convbert/tokenization_convbert.py#L24-L30

In particular YituTech/conv-bert-base is a reference checkpoint for ConvBert.
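
A tiny sketch (my addition) of how you could inspect that mapping from a Python shell, using the transformers version current at the time of this issue:

from transformers.models.convbert.tokenization_convbert import PRETRAINED_VOCAB_FILES_MAP

# Each entry maps a reference checkpoint name to the URL of its vocabulary file.
for file_type, checkpoints in PRETRAINED_VOCAB_FILES_MAP.items():
    print(file_type)
    for checkpoint_name, url in checkpoints.items():
        print(f"  {checkpoint_name}: {url}")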

Is this what you were having trouble with? :relaxed:

elusenji commented 2 years ago

Yes, this helps!

danhphan commented 2 years ago

Hi @SaulLu, I am happy to write tests for RemBert. Thanks.

mpoemsl commented 2 years ago

Hi @SaulLu, I would like to work on RetriBert.

nnlnr commented 2 years ago

Hi @SaulLu, I'd be happy to work on LED - Thanks!!

danhphan commented 2 years ago

Thanks @SaulLu, I'm working on RemBert :)

SaulLu commented 2 years ago

Hello to all!

The first two PRs have been merged into master! Many thanks to @leondz and @mpoemsl! :confetti_ball:

@nnlnr, @anmolsjoshi, @tgadeliya, @Rajathbharadwaj, @farahdian, @elusenji, and @danhphan, I wanted to take this opportunity to ask you how things are going for you? Are you experiencing any particular difficulties?

elusenji commented 2 years ago

@SaulLu, I've made some progress. Would it be okay to send in a work-in-progress pull request?

farahdian commented 2 years ago

^was wondering if I could do the same, could use a second pair of eyes on it

SaulLu commented 2 years ago

Yes sure!

nnlnr commented 2 years ago

Hi @SaulLu, apologies for the delayed response. I've been making some progress with LED and will hopefully be submitting a WIP PR in the coming week. Thanks for following up!

SaulLu commented 2 years ago

Hi @nnlnr, @anmolsjoshi, @Rajathbharadwaj, @elusenji and @danhphan,

Just checking in to see how the integration of the tests is going for you :hugs:

ashwinjohn3 commented 2 years ago

Hi @SaulLu, can I work on Splinter if no one is working on it? I believe it's not claimed yet.

SaulLu commented 2 years ago

Hi @ashwinjohn3,

Absolutely! Don't hesitate to reach out if you are having difficulties

ashwinjohn3 commented 2 years ago

@SaulLu Thank you so much. Will do :)

danhphan commented 2 years ago

Hi @SaulLu, sorry for the late response and for being quite slow. I am still working on RemBert and will try to finish it in the coming weeks. Thank you.

IMvision12 commented 2 years ago

@SaulLu are there any tokenizers left???

danhphan commented 2 years ago

Hi @IMvision12, I am busy with deadlines on a couple of other projects, so can you work on RemBert? Thanks!

IMvision12 commented 2 years ago

Yeah sure @danhphan Thanks.

danhphan commented 2 years ago

Thank you @IMvision12 !

y3sar commented 1 year ago

Seems like I'm a bit late to the party 😅. Is there any tokenizer not listed here that I can write tests for? Or maybe one of the tokenizers listed here will become available again. Please let me know @SaulLu, I would love to contribute 😀

SaulLu commented 1 year ago

Unfortunately, I don't have much time left to help with transformers now. But let me ping @ArthurZucker for visibility

ArthurZucker commented 1 year ago

Hey @y3sar, thanks for wanting to contribute. I think that the RemBert tests PR was close, so you can probably take that over if you want! Other tests that might be missing:

y3sar commented 1 year ago

@ArthurZucker thanks for your reply. I will start working on RemBert tests.

rchan26 commented 1 year ago

hey @ArthurZucker, I'm happy to have a look at contributing to a few of these. I'll start off with gpt_neox πŸ™‚

ENate commented 11 months ago

Hi. Are the tests still open for contribution? Thanks

nileshkokane01 commented 11 months ago

@ArthurZucker some of the claimed tokenizers are dormant. Can I take over one of them? If so, can you let me know which one?

cc: @SaulLu

ArthurZucker commented 11 months ago

Hey all! 🤗 If you don't find an open PR for a model, feel free to open one. If a PR has been inactive for quite some time, just ping the author to make sure they are alright with you taking over, or whether they still want to contribute! (If it has been inactive for more than 2 months, I think it's alright to work on it.) 👍🏻