huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

New Model: Charformer: Fast Character Transformers via Gradient-based Subword Tokenization #12410

Open neel04 opened 3 years ago

neel04 commented 3 years ago

🌟 New model addition

Model description

arXiv = https://arxiv.org/pdf/2106.12672.pdf (pre-print; under review)

In this paper, they introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. More importantly, they introduce Charformer, a deep Transformer model that integrates GBST and operates at the byte level.

Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
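For intuition, here is a rough, self-contained PyTorch sketch of the GBST idea described above. This is not the authors' code; the class name and all parameters are made up for illustration. Each position forms candidate blocks of sizes 1..B by pooling neighbouring character embeddings, a learned scorer softly chooses among the block sizes, and the mixed sequence is downsampled before it enters the Transformer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniGBST(nn.Module):
    """Toy sketch of gradient-based subword tokenization (GBST), for illustration only."""

    def __init__(self, num_bytes=256, dim=64, max_block_size=4, downsample_factor=4):
        super().__init__()
        self.embed = nn.Embedding(num_bytes, dim)
        self.score = nn.Linear(dim, 1)          # one relevance score per candidate block
        self.max_block_size = max_block_size
        self.downsample_factor = downsample_factor

    def forward(self, byte_ids):                # byte_ids: (batch, seq_len)
        x = self.embed(byte_ids)                # (batch, seq_len, dim)
        candidates, scores = [], []
        for b in range(1, self.max_block_size + 1):
            # mean-pool blocks of size b, then broadcast each block back to its positions
            pooled = F.avg_pool1d(x.transpose(1, 2), b, stride=b, ceil_mode=True)
            pooled = pooled.repeat_interleave(b, dim=-1)[..., : x.size(1)].transpose(1, 2)
            candidates.append(pooled)                       # (batch, seq_len, dim)
            scores.append(self.score(pooled))               # (batch, seq_len, 1)
        blocks = torch.stack(candidates, dim=2)             # (batch, seq_len, B, dim)
        weights = torch.softmax(torch.stack(scores, dim=2), dim=2)
        mixed = (weights * blocks).sum(dim=2)               # soft choice over block sizes
        # downsample so the Transformer runs on a shorter sequence
        out = F.avg_pool1d(mixed.transpose(1, 2), self.downsample_factor,
                           stride=self.downsample_factor, ceil_mode=True)
        return out.transpose(1, 2)                          # (batch, seq_len/factor, dim)

# Example: 128 input bytes are reduced to 32 positions of dimension 64.
gbst = MiniGBST()
print(gbst(torch.randint(0, 256, (2, 128))).shape)  # torch.Size([2, 32, 64])
```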

Open source status

stefan-it commented 3 years ago

Code is out now:

https://github.com/google-research/google-research/tree/master/charformer (note that this URL differs from the one given in the paper)

mapmeld commented 3 years ago

An unofficial PyTorch implementation of Charformer: https://github.com/lucidrains/charformer-pytorch
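For reference, that library exposes a standalone GBST module that downsamples a byte sequence before it is fed to a Transformer. A minimal usage sketch follows; the argument names are taken from that repo's README as I recall it, so treat them as assumptions and check the repo before use.

```python
import torch
from charformer_pytorch import GBST  # pip install charformer-pytorch

# Parameter names below are assumptions based on the repo's README.
tokenizer = GBST(
    num_tokens = 257,        # 256 byte values plus one padding token
    dim = 512,               # embedding dimension
    max_block_size = 4,      # largest candidate subword block
    downsample_factor = 4,   # sequence-length reduction before the Transformer
)

byte_ids = torch.randint(0, 257, (1, 1024))
mask = torch.ones(1, 1024).bool()

# returns downsampled token embeddings and a correspondingly downsampled mask
tokens, mask = tokenizer(byte_ids, mask=mask)
print(tokens.shape)  # roughly (1, 256, 512) with the settings above
```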

MartinXPN commented 2 years ago

Thanks for the great work! Will Charformer be supported in the near future?

MilesQLi commented 2 years ago

Still not supported.