Hi, I am considering trying to refactor this repo to support distributed training with DistributedDataParallel (or maybe Horovod). Do you happen to foresee any major issues with that working?
@austinmw I will add DDP training myself when I have enough free time. If you can do this, I would be very grateful! Then a branch merge can be done to make MCMOT better for training.
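
For reference, here is a minimal sketch of the standard one-process-per-GPU DDP pattern (launched with `torchrun`), not this repo's actual code; `build_model` and `build_dataset` are placeholders for whatever model/dataset factories MCMOT uses, and batch size, learning rate, and epoch count are arbitrary:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler


def build_model():
    # Placeholder: stands in for the repo's real model constructor.
    return torch.nn.Linear(128, 10)


def build_dataset():
    # Placeholder: stands in for the repo's real dataset.
    return torch.utils.data.TensorDataset(
        torch.randn(1024, 128), torch.randint(0, 10, (1024,))
    )


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = build_model().cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = build_dataset()
    sampler = DistributedSampler(dataset)  # each rank sees a distinct shard
    loader = DataLoader(dataset, batch_size=8, sampler=sampler, num_workers=4)

    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle shards differently each epoch
        for inputs, targets in loader:
            inputs = inputs.cuda(local_rank, non_blocking=True)
            targets = targets.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()  # gradients are all-reduced across ranks here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=4 train_ddp.py`. The main refactoring work in an existing repo is usually wiring in the `DistributedSampler`, restricting logging/checkpointing to rank 0, and scaling the learning rate for the larger effective batch size.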