Thank you for providing such a detailed tutorial.
I updated the preprocessing pipeline for torchtext 0.13 by replacing `Field` and `BucketIterator` with `get_tokenizer` and `DataLoader`, following the official torchtext migration guide. The code has been tested locally. The training results differ slightly from the original ones because the padding token is now part of the vocabulary, which increases the number of parameters in the embedding layer.
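For reference, here is a minimal sketch of the new pipeline shape (the toy data and variable names are placeholders, not the tutorial's actual dataset):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

# Hypothetical toy data standing in for the tutorial's dataset.
train_data = ["the quick brown fox", "jumps over the lazy dog"]

tokenizer = get_tokenizer("basic_english")

# "<pad>" is now a real vocabulary entry, which is why the embedding
# layer gains a row compared to the old Field-based pipeline.
vocab = build_vocab_from_iterator(
    (tokenizer(line) for line in train_data),
    specials=["<unk>", "<pad>"],
)
vocab.set_default_index(vocab["<unk>"])
PAD_IDX = vocab["<pad>"]

def collate_batch(batch):
    # Numericalize each example, then pad the batch to a common length,
    # replacing what BucketIterator used to do implicitly.
    tensors = [torch.tensor(vocab(tokenizer(line)), dtype=torch.long)
               for line in batch]
    return pad_sequence(tensors, batch_first=True, padding_value=PAD_IDX)

loader = DataLoader(train_data, batch_size=2, shuffle=True,
                    collate_fn=collate_batch)
```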
I have updated only the first tutorial for now. If you think it would be helpful, I will update the rest.