Hello! I'm using this model architecture for NER on domain-specific tasks, and it has worked quite well! However, the old version of transformers it depends on is still a bit troublesome.
For example, to stay close to BART's pretraining process, I want to encode the whole sentence directly with the tokenizer, rather than splitting it on spaces and then tokenizing the pre-split words with 'add_prefix_space=True'. I managed to do this with the 'span' method, but the 'word' method needs extra work because of the old tokenizer version.
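For concreteness, here is a minimal sketch of the two encoding styles I mean. It assumes transformers >= 4.0 with a fast tokenizer and uses the 'facebook/bart-base' checkpoint purely as an example; neither is required by this repo:

```python
from transformers import BartTokenizerFast

sentence = "Barack Obama was born in Hawaii."

# What I want: encode the raw sentence directly, matching the input
# format BART saw during pretraining.
tok = BartTokenizerFast.from_pretrained("facebook/bart-base")
enc = tok(sentence, return_offsets_mapping=True)

# The current 'word'-method workaround: split on spaces first, then
# tokenize the pre-split words with add_prefix_space=True so each word
# gets the leading-space BPE variant it would have mid-sentence.
tok_ws = BartTokenizerFast.from_pretrained(
    "facebook/bart-base", add_prefix_space=True
)
enc_ws = tok_ws(sentence.split(" "), is_split_into_words=True)

print(enc.tokens())     # subwords from direct whole-sentence encoding
print(enc_ws.tokens())  # subwords from the pre-split workaround
```

The two token sequences usually match for simple text, but the pre-split path can diverge whenever whitespace splitting disagrees with the BPE segmentation, which is why direct encoding is closer to pretraining.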
Is there any plan to release a version that supports transformers 4.0.0 (or above)?