Hi, I would like to add tests for the `Longformer` tokenizer.
@SaulLu I would like to add tests for Flaubert
Hey, I would like to contribute tests for Electra. Pointers please!
Thank you all for offering your help!
@Rajathbharadwaj, sure! What do you need help with? Do you need more details on any of the steps listed in the main post?
Hi, first-time contributor here. Could I add tests for Splinter?
Is anyone else encountering this error with the cookiecutter command? My dev environment setup seemed to have gone fine... Also, I had run the command inside the `tests/splinter` directory.
@faiazrahman, thank you so much for working on this! Regarding your issue, if you're in the `tests/splinter` folder, can you try to run `cookiecutter ../../templates/adding_a_missing_tokenization_test/`?

You should then have a newly created folder `cookiecutter-template-BrandNewBERT` inside `tests/splinter`. :slightly_smiling_face:
If that's the case, you'll then need to do something like:
mv cookiecutter-template-BrandNewBERT/test_tokenization_brand_new_bert.py .
rm -r cookiecutter-template-BrandNewBERT/
Keep me posted :smile:
Thanks so much @SaulLu, turns out it was due to not recognizing my installed cookiecutter, so I sorted it out there.
Hi @anmolsjoshi, @tgadeliya, @Rajathbharadwaj and @farahdian,
Just a quick message to see how things are going for you and if you have any problems. If you do, please share them! :hugs:
Thanks @SaulLu! I've been exploring the tokenization test files in the repo, trying to figure out which ones would be a good basis for writing a tokenization test for Splinter... If you have any guidance on this, it would be super helpful!
Hey @SaulLu, my apologies, I've been a bit busy. I'll get started ASAP; however, I still didn't understand where exactly I should run the cookiecutter. Help on this would be appreciated!
Hi @farahdian ,
Thank you very much for the update! To know where you stand, have you done step 3)? Is it for step 4) that you are looking for a similar tokenizer? :slightly_smiling_face:
Hi @Rajathbharadwaj ,
Thank you for the update too!
> I still didn't understand where exactly I should run the cookie cutter

You can run the `cookiecutter` command anywhere, as long as the command is followed by the path to the folder `adding_a_missing_tokenization_test` in the transformers repo that you have cloned locally.

When you run the command, it will create a new folder at your current location. In this folder you will find a base for the Python test file that you need to move into the `tests/electra` folder of your local transformers clone. Once this file is moved, you can delete the folder that was created by the cookiecutter command.
Below is an example of the sequence of bash commands I would personally use:
(base) username@hostname:~$ cd ~/repos
(base) username@hostname:~/repos$ git clone git@github.com:huggingface/transformers.git
[Install my development setup]
(transformers-dev) username@hostname:~/repos$ cookiecutter transformers/templates/adding_a_missing_tokenization_test/
[Answer the questionnaire]
(transformers-dev) username@hostname:~/repos$ mv cookiecutter-template-Electra/test_tokenization_electra.py transformers/tests/electra
(transformers-dev) username@hostname:~/repos$ rm -r cookiecutter-template-Electra/
Hope that'll help you :smile:
Appreciate your patience @SaulLu! Yup, I've done step 3 and generated a test tokenization file with cookiecutter. Now onto working on the setUp method!
@farahdian, this is indeed a very good question: finding the closest tokenizer to draw inspiration from and identifying the important differences with that tokenizer is the most interesting part.

For that, there are several ways to start:

- Identify the high-level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the `PRETRAINED_VOCAB_FILES_MAP` global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a `vocab` file, with both a `vocab` and a `merges` file, with a `sentencepiece` binary file, or with only a `tokenizer.json` file).

For the model you're in charge of, @farahdian:
Transformers' documentation mentions that:

> Use SplinterTokenizer (rather than BertTokenizer), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the run_qa.py script).

Splinter's paper mentions that:

> Splinter-base shares the same architecture (transformer encoder (Vaswani et al., 2017)), vocabulary (cased wordpieces), and number of parameters (110M) with SpanBERT-base (Joshi et al., 2020).

And SpanBERT's paper mentions that:

> We reimplemented BERT's model and pre-training method in fairseq (Ott et al., 2019). We used the model configuration of BERT large as in Devlin et al. (2019) and also pre-trained all our models on the same corpus: BooksCorpus and English Wikipedia using cased Wordpiece tokens.

Also, the vocab files of `bert-base-cased` (vocab file) and of `splinter-base` (vocab file) look very similar.

Given these mentions, it seems that Splinter's tokenizer is very similar to Bert's. It would be interesting to confirm this impression and to understand all the differences between `SplinterTokenizer` and `BertTokenizer` so that they are well reflected in the test. :slightly_smiling_face:
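If it helps, a quick way to eyeball this is to load both slow tokenizers and compare them on a few strings. This is only a minimal sketch, assuming the `bert-base-cased` and `tau/splinter-base` checkpoints; the `[QUESTION]` token check is just an example of where I'd expect them to differ:

```python
# Minimal sketch (not part of the test file): compare SplinterTokenizer
# with BertTokenizer to spot behavioural differences. Checkpoint names
# are assumptions based on the hub checkpoints mentioned above.
from transformers import BertTokenizer, SplinterTokenizer

bert = BertTokenizer.from_pretrained("bert-base-cased")
splinter = SplinterTokenizer.from_pretrained("tau/splinter-base")

for text in ["Hello, how are you?", "UNwant\u00e9d,running"]:
    print(bert.tokenize(text))
    print(splinter.tokenize(text))

# Special tokens are one place where the two could differ,
# e.g. Splinter's extra [QUESTION] token.
print(set(splinter.all_special_tokens) - set(bert.all_special_tokens))
```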
Thank you so much @SaulLu, I understand now. However, I am unsure about the `slow_tokenizer_use_sentencepiece` question. I set it to True because there is a `tokenization_electra.py` file, but I didn't understand

> "Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"

So did I select correctly? Or should I set it to False? Apologies for asking so many questions!

However, I've now started adding tests for Electra and will keep you posted if I run into something I don't understand. Thanks for helping once again!
Hi @SaulLu,

I think my case is the easiest one, because Longformer actually uses the same tokenizer as RoBERTa with no differences. So I adapted the tests from the RoBERTa tokenizer (small refactors and changes) and prepared a branch with the tests. Nevertheless, I really want to dive deeper into the code of `TokenizerTesterMixin`, and if I find some untested behaviour after that, I will add new tests.

But I have one doubt that you can resolve: do you expect the Longformer tests to use a different toy tokenizer example than the RoBERTa tests? Or should I write my own tests from scratch?
@Rajathbharadwaj, I'm happy to help! Especially as your questions will surely be useful for other people.

> However, I am unsure about the slow_tokenizer_use_sentencepiece question. I set it to True because there is a tokenization_electra.py file, but I didn't understand "Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece". So did I select correctly? Or should I set it to False? Apologies for asking so many questions :smile:
Some `XxxTokenizer` classes (without the `Fast` at the end, implemented in the `tokenization_xxx.py` file) use a backend based on the sentencepiece library. For example, `T5Tokenizer` uses a sentencepiece-based backend: you can see this import at the beginning of the `tokenization_t5.py` file:
https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L24
and you can see that the backend is instantiated here:
https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L151-L152
On the contrary, `BertTokenizer`, for example, does not use a sentencepiece backend.
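If you want a quick way to double-check a given slow tokenizer yourself, here is a heuristic sketch (not an official API guarantee): sentencepiece-backed slow tokenizers keep the loaded `SentencePieceProcessor` in an `sp_model` attribute.

```python
# Heuristic sketch: slow tokenizers backed by sentencepiece keep the
# loaded SentencePieceProcessor in an `sp_model` attribute, while
# WordPiece/BPE-based slow tokenizers do not have one.
from transformers import BertTokenizer, T5Tokenizer

t5 = T5Tokenizer.from_pretrained("t5-small")
bert = BertTokenizer.from_pretrained("bert-base-cased")

print(hasattr(t5, "sp_model"))    # True  -> sentencepiece backend
print(hasattr(bert, "sp_model"))  # False -> plain WordPiece vocab file
```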
I hope this helped you!
Hi @tgadeliya ,
Thanks for the update!
> But I have one doubt that you can resolve: do you expect the Longformer tests to use a different toy tokenizer example than the RoBERTa tests? Or should I write my own tests from scratch?

In your case, I wouldn't be surprised if Longformer uses the same tokenizer as RoBERTa. In that case, it seems legitimate to use the same toy tokenizer. Maybe the only check you can do to confirm this hypothesis is to compare the vocabularies of the "main" checkpoints of both models:
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/merges.txt
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/vocab.json
!wget https://huggingface.co/roberta-base/raw/main/merges.txt
!wget https://huggingface.co/roberta-base/raw/main/vocab.json
!diff merges.txt merges.txt.1
!diff vocab.json vocab.json.1
Turns out the result confirms it!
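For reference, the same check can be done in Python. This is only a sketch; `bpe_ranks` is an internal attribute of the GPT-2-style slow tokenizers, so treat it as a quick sanity check rather than a stable API:

```python
# Sketch: load both slow tokenizers and compare their vocabularies and
# BPE merge rules directly instead of diffing the downloaded files.
from transformers import LongformerTokenizer, RobertaTokenizer

longformer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
roberta = RobertaTokenizer.from_pretrained("roberta-base")

print(longformer.get_vocab() == roberta.get_vocab())  # expected: True
print(longformer.bpe_ranks == roberta.bpe_ranks)      # expected: True
```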
Hi, I'm happy to take MobileBert
I'd like to work on ConvBert.
> Identify the high-level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the `PRETRAINED_VOCAB_FILES_MAP` global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a `vocab` file, with both a `vocab` and a `merges` file, with a `sentencepiece` binary file, or with only a `tokenizer.json` file).
@SaulLu I'm having trouble identifying ConvBert's 'reference' checkpoint files on the hub. Would you kindly provide more guidance on this?
Hi @elusenji ,
In the `src/transformers/models/convbert/tokenization_convbert.py` file you can find the global variable `PRETRAINED_VOCAB_FILES_MAP`:
https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/models/convbert/tokenization_convbert.py#L24-L30
In particular YituTech/conv-bert-base is a reference checkpoint for ConvBert.
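You can also print that mapping directly. This is a sketch; `PRETRAINED_VOCAB_FILES_MAP` is a module-level constant in the slow tokenizer file at the version linked above and may move or disappear in later transformers versions:

```python
# Sketch: inspect the reference checkpoints listed for ConvBert.
# PRETRAINED_VOCAB_FILES_MAP is a module-level constant in the slow
# tokenizer file (see the permalink above); it may change across versions.
from transformers.models.convbert.tokenization_convbert import PRETRAINED_VOCAB_FILES_MAP

print(PRETRAINED_VOCAB_FILES_MAP)
# e.g. {"vocab_file": {"YituTech/conv-bert-base": "https://huggingface.co/...", ...}}
```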
Is this what you were having trouble with? :relaxed:
Yes, this helps!
Hi @SaulLu, I am happy to write tests for RemBert. Thanks.

Hi @SaulLu, I would like to work on RetriBert.

Hi @SaulLu, I'd be happy to work on LED - Thanks!!
Thanks @SaulLu, I'm working on RemBert :)
Hello to all!
The first two PRs have been merged into master! Many thanks to @leondz and @mpoemsl! :confetti_ball:

@nnlnr, @anmolsjoshi, @tgadeliya, @Rajathbharadwaj, @farahdian, @elusenji, and @danhphan, I wanted to take this opportunity to ask how things are going for you. Are you experiencing any particular difficulties?
@SaulLu, I've made some progress. Would it be okay to send in a work-in-progress pull request?
^was wondering if I could do the same, could use a second pair of eyes on it
Yes sure!
Hi @SaulLu
Apologies for the delayed response - I've been making some progress with LED and will hopefully be submitting a WIP PR in the coming week. Thanks for following up!
Hi @nnlnr, @anmolsjoshi, @Rajathbharadwaj, @elusenji and @danhphan,
Just checking in to see how the integration of the tests is going for you :hugs:
Hi @SaulLu, can I work on Splinter if no one is working on it? I believe it's not claimed yet.
Hi @ashwinjohn3,
Absolutely! Don't hesitate to ask if you run into difficulties.
@SaulLu Thank you so much. Will do :)
Hi @SaulLu, sorry for the late response and for being quite slow. I am still working on RemBert and will try to finish it in the coming weeks. Thank you.
@SaulLu are there any tokenizers left???
Hi @IMvision12, I am busy with the deadlines of a couple of other projects, so can you work on RemBert? Thanks!
Yeah sure @danhphan Thanks.
Thank you @IMvision12 !
Seems like I'm a bit late to the party. Is there any tokenizer not listed here that I can write tests for? Or maybe one of the claimed ones will become available again. Please let me know @SaulLu, I would love to contribute!
Unfortunately, I don't have much time left to help with transformers now. But let me ping @ArthurZucker for visibility
Hey @y3sar thanks for wanting to contribute. I think that the RemBert tests PR was close, you can probably take that over if you want! Other tests that might be missing:
@ArthurZucker thanks for your reply. I will start working on RemBert tests.
Hey @ArthurZucker, I'm happy to have a look at contributing to a few of these. I'll start off with `gpt_neox`.
Hi. Are the tests still open for contribution? Thanks
@ArthurZucker some of the claimed tokenizers are dormant. Can I take over one of them? If so, can you let me know which one?
cc: @SaulLu
Hey all! :hugs: If you don't find a PR open for a model, feel free to open one. If a PR is inactive for quite some time, just ping the author to make sure they are alright with you taking over, or whether they still want to contribute! (If it has been inactive for more than 2 months, I think it's alright to work on it.)
Add missing tokenizer test files
Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.
Tokenizers concerned

Not yet claimed:
- none

Claimed:
- with an ongoing PR: none
- with an accepted PR:
How to contribute?

1. Claim a tokenizer

   a. Choose a tokenizer from the list of "not yet claimed" tokenizers.
   b. Check that no one in the messages for this issue has indicated that they care about this tokenizer.
   c. Put a message in the issue that you are handling this tokenizer.

2. Create a local development setup (if you have not already done it)

   I refer you to the "start-contributing-pull-requests" section of the Contributing guidelines, where everything is explained. Don't be afraid of step 5. For this contribution you will only need to run locally the tests you add.

3. Follow the instructions in the README inside the `templates/adding_a_missing_tokenization_test` folder to generate the template with cookiecutter for the new test file you will be adding. At the end of the template generation, don't forget to move the new test file to the sub-folder named after the model in the `tests` folder. Some details about the questionnaire, assuming that the lowercase model name is `brand_new_bert`:

   - Set true if there is a `tokenization_brand_new_bert.py` file in the folder `src/transformers/models/brand_new_bert`.
   - Set true if there is a `tokenization_brand_new_bert_fast.py` file in the folder `src/transformers/models/brand_new_bert`.
   - Set true if the tokenizer defined in the `tokenization_brand_new_bert.py` file uses sentencepiece. If the tokenizer doesn't have a `tokenization_brand_new_bert.py` file, set False.

4. Complete the `setUp` method in the generated test file; you can take inspiration from how it is done for the other tokenizers (a rough sketch of the idea is shown after this list).

5. Try to run all the added tests. It is possible that some tests will not pass, so it is important to understand why: sometimes the common test is not suited to a tokenizer, and sometimes a tokenizer can have a bug. You can also look at what is done in similar tokenizer tests; if there are big problems or you don't know what to do, we can discuss this in the PR (step 7).

6. (Bonus) Try to get a good understanding of the tokenizer in order to add custom tests for it.

7. Open a PR with the new test file added, remember to fill in the PR title and message body (referencing this issue), and request a review from @LysandreJik and @SaulLu.
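For step 4, here is a rough idea of the toy-vocabulary pattern behind `setUp`. This is only a sketch modeled on the existing WordPiece-style tests such as BERT's; real test files write the vocab inside `setUp` using `self.tmpdirname` provided by `TokenizerTesterMixin` and instantiate the model's own tokenizer class rather than `BertTokenizer`:

```python
# Sketch of the toy-vocabulary idea used by setUp in the tokenization
# tests: write a tiny vocab to disk, then build a tokenizer from it.
# BertTokenizer is used here only as a stand-in for the model's own class.
import os
import tempfile

from transformers import BertTokenizer

vocab_tokens = [
    "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]",
    "want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest",
]

tmpdir = tempfile.mkdtemp()
vocab_file = os.path.join(tmpdir, "vocab.txt")
with open(vocab_file, "w", encoding="utf-8") as vocab_writer:
    vocab_writer.write("".join(token + "\n" for token in vocab_tokens))

tokenizer = BertTokenizer(vocab_file)
print(tokenizer.tokenize("UNwant\u00e9d,running"))
# roughly: ['un', '##want', '##ed', ',', 'runn', '##ing']
```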
Tips
Do not hesitate to read the questions / answers in this issue :newspaper: