Closed: Aintky2000 closed this issue 10 months ago
In train.py, lines 973 & 982, maybe you could add an `if args.distributed` check, in case the code is run on a single GPU; otherwise there will be a bug. A sketch of the guard is below.
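A minimal sketch of the suggested fix. The actual code around lines 973 & 982 isn't quoted in this issue, so the reduce call, the helper name, and the `args.distributed` flag here are assumptions based on a common DDP training-loop pattern:

```python
# Hypothetical sketch -- the exact code at train.py lines 973 & 982 is not
# shown in this issue; the collective op and `args.distributed` flag are
# assumed from typical distributed training loops.
import torch
import torch.distributed as dist

def reduce_loss(loss: torch.Tensor, args) -> torch.Tensor:
    """Average a loss tensor across processes, but only in distributed mode."""
    if args.distributed:  # guard: skip collective ops on a single GPU
        dist.all_reduce(loss, op=dist.ReduceOp.SUM)
        loss = loss / dist.get_world_size()
    return loss
```

Without the guard, the collective calls run even when no process group was ever initialized, which is presumably the single-GPU bug being reported.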
Hi! Sorry, I have never run the code without distributed mode. If we only use one GPU, might the value of torch.distributed.get_world_size() just be 1?
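For what it's worth, on recent PyTorch versions torch.distributed.get_world_size() does not return 1 when the process group was never initialized; it raises an error instead, which is likely the bug above. A hedged sketch of a safe check:

```python
# Sketch: query the world size safely whether or not distributed mode is on.
# On recent PyTorch, get_world_size() raises if init_process_group() was
# never called, rather than returning 1.
import torch.distributed as dist

if dist.is_available() and dist.is_initialized():
    world_size = dist.get_world_size()  # safe: default process group exists
else:
    world_size = 1  # single-GPU / non-distributed fallback
print(world_size)
```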