The repo's config has an option for distributed training. However, if you trace what happens to `args.distributed`, nothing actually uses it. Do you know when this will be implemented?
Sorry for the confusion. We didn't implement distributed training for this project, so please ignore that flag (the argument was accidentally copied over from my previous workflows).
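For anyone landing here who wants to wire the flag up themselves, a minimal sketch of how `args.distributed` could gate a distributed setup path. This is illustrative only, not the repo's actual API: the parser and `setup_training` function are hypothetical, and the real implementation would call something like `torch.distributed.init_process_group`.

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the repo's currently unused flag.
    p = argparse.ArgumentParser()
    p.add_argument("--distributed", action="store_true",
                   help="enable multi-GPU training (currently a no-op in the repo)")
    return p

def setup_training(args):
    # Sketch: gate the (not yet implemented) distributed path on the flag.
    if args.distributed:
        # A real implementation would initialize the process group here,
        # e.g. torch.distributed.init_process_group(backend="nccl")
        return "distributed"
    return "single-process"

if __name__ == "__main__":
    args = build_parser().parse_args(["--distributed"])
    print(setup_training(args))
```

Since the flag is never consumed in the released code, both code paths currently behave identically; the sketch just shows where the branch would live.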
Great work on the repo btw!