Open burui11087 opened 4 months ago
Hi, I'm having some issues with training on BlendedMVS in DDP mode.

```
Traceback (most recent call last):
  File "train.py", line 265, in <module>
    mp.spawn(main, nprocs=args.world_size, args=(args, config))
  File "python3.8/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
    while not context.join():
  File "python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 2 terminated with the following error:
Traceback (most recent call last):
  File "python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "MVSFormerPlusPlus/train.py", line 207, in main
    trainer.train()
  File "MVSFormerPlusPlus/base/base_trainer.py", line 79, in train
    result = self._train_epoch(epoch)
  File "MVSFormerPlusPlus/trainer/mvsformer_trainer.py", line 128, in _train_epoch
    outputs = self.model.forward(imgs_tmp, cam_params_tmp, depth_values[b_start:b_end])
  File "python3.8/site-packages/torch/nn/parallel/distributed.py", line 1139, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
```

Has anyone had the same issue? Thanks

This is probably caused by frozen parameters (e.g. the DINOv2 backbone) not receiving gradients, which DDP does not handle gracefully by default. What I have done in the past is to keep those frozen modules in a plain Python list so their parameters are never registered, which prevents this type of error. You can also set find_unused_parameters=True, as the error message suggests.
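A minimal sketch of what both workarounds can look like, assuming a toy module in place of MVSFormer++ (names like `ToyModel`, `backbone`, `head`, and `local_rank` are made up for illustration and are not the repo's actual code):

```python
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class ToyModel(nn.Module):
    """Stand-in for a model with a frozen feature extractor (e.g. DINOv2)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)  # pretend this is the frozen encoder
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        # Detaching (or running under no_grad) means self.backbone's parameters
        # never receive gradients, which is what trips DDP's reducer.
        feats = self.backbone(x).detach()
        return self.head(feats)


def wrap_option_1(model: nn.Module, local_rank: int) -> nn.Module:
    # Option 1: what the error message suggests. DDP searches each iteration
    # for parameters that did not take part in the loss and marks them ready,
    # at the cost of some extra per-iteration overhead.
    return DDP(model.cuda(local_rank), device_ids=[local_rank],
               find_unused_parameters=True)


def wrap_option_2(model: ToyModel, local_rank: int) -> nn.Module:
    # Option 2: hide the frozen parameters from the reducer by setting
    # requires_grad=False *before* constructing DDP; DDP only registers
    # gradient hooks for parameters that require grad at wrap time.
    for p in model.backbone.parameters():
        p.requires_grad_(False)
    return DDP(model.cuda(local_rank), device_ids=[local_rank])
```

The "put those in a list" trick works along the same lines: a module stored in a plain Python list (rather than as a module attribute or in an `nn.ModuleList`) is never registered by `nn.Module`, so DDP has nothing to reduce for it, but you then have to move it to the right device and handle its weights yourself.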