jcyk / copyisallyouneed

Code for our ACL 2021 paper "Neural Machine Translation with Monolingual Translation Memory"

A bug regarding distributed training #10

Closed ringos closed 3 years ago

ringos commented 3 years ago

In train.py, line 203 calls dist.barrier().

If only a single GPU is used, calling dist.barrier() raises an error because the distributed process group has not been initialized. Adding if args.world_size != 1: before line 203 would fix that error; see the sketch below.
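A minimal sketch of the proposed guard, assuming args.world_size holds the total number of distributed processes; the helper name barrier_if_distributed is hypothetical, not from the repo:

```python
import torch.distributed as dist

def barrier_if_distributed(world_size: int) -> None:
    """Synchronize all workers, but only when a distributed
    process group has actually been initialized (world_size > 1).

    Calling dist.barrier() without an initialized process group
    raises a RuntimeError, which is the bug reported here.
    """
    if world_size != 1:
        dist.barrier()

# In train.py, line 203 would then become:
#     barrier_if_distributed(args.world_size)
```

An equivalent guard would be checking dist.is_initialized(), which returns True only after init_process_group has been called, so the same code path works regardless of how world_size is tracked.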