Closed: RyanQR closed this issue 1 year ago
I also have the same question, thank you!
We only trained MIC on a single GPU and did not use distributed training.
There's a distributed option (args.launcher) in tools/train.py at line 106, but your code doesn't seem to support distributed training?
I noticed that your project only uses one GPU. Can your code be trained in a distributed way?
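For reference, here is a minimal sketch of how the `args.launcher` flag is conventionally consumed in MMCV/MMSegmentation-style train scripts, which MIC's tools/train.py appears to be derived from. This is the common upstream pattern, not a confirmation that MIC wires it up end to end; the argument names and choices mirror the standard mmseg script.

```python
# Sketch of the conventional MMCV launcher handling (assumption: MIC's
# tools/train.py follows this MMSegmentation-style pattern).
import argparse

from mmcv.runner import init_dist


def parse_args():
    parser = argparse.ArgumentParser(description='Train a model')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    return parser.parse_args()


def main():
    args = parse_args()
    if args.launcher == 'none':
        # Single-GPU / non-distributed run (what the authors report using).
        distributed = False
    else:
        # Initializes torch.distributed for the chosen launcher; for
        # 'pytorch' this reads the env vars set by the launch utility.
        distributed = True
        init_dist(args.launcher, backend='nccl')
    print(f'distributed={distributed}')


if __name__ == '__main__':
    main()
```

With this wiring, a multi-GPU run would typically be started with something like `python -m torch.distributed.launch --nproc_per_node=4 tools/train.py CONFIG --launcher pytorch`. Whether the rest of MIC's training loop (data samplers, model wrapping) actually handles the distributed case is exactly what this issue is asking, and the authors state above that they only trained on a single GPU.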