SHI-Labs / Compact-Transformers

Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
https://arxiv.org/abs/2104.05704
Apache License 2.0

Fixed text tokenizer mask shape #60

Open HosseinZaredar opened 2 years ago

HosseinZaredar commented 2 years ago

Hi,

There was a small problem with the shape of the mask returned by the TextTokenizer forward function. The downstream function that consumes this mask expects a 2D tensor, so the mask should not be unsqueezed before being returned from TextTokenizer.

The problem is fixed in this pull request.
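A minimal sketch of the shape issue described above (the tensor names and shapes here are illustrative assumptions, not the exact code from the repository):

```python
import torch

# Hypothetical illustration of the mask-shape issue fixed in this PR.
# Before the fix: the tokenizer unsqueezed the mask, yielding (batch, 1, seq_len).
# After the fix: the mask is returned as a 2D (batch, seq_len) tensor, which is
# what the downstream consumer expects.

batch, seq_len = 2, 8
mask = torch.ones(batch, seq_len, dtype=torch.bool)  # 2D padding mask

# Previous behaviour (problematic): an extra dimension added before returning.
mask_unsqueezed = mask.unsqueeze(1)

print(mask.shape)             # torch.Size([2, 8])    <- expected by the next function
print(mask_unsqueezed.shape)  # torch.Size([2, 1, 8]) <- shape mismatch downstream
```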