angiend opened this issue 2 years ago
I pushed a fix in https://github.com/cvg/pixloc/commit/002c1987d387558ecb4ac53a120973fdf258ce8b Can you please let me know if it solves the issue?
Sorry to bother you again, @Skydes.
I tried your method and changed the code as you showed,
but I get an error:
.......
Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/zhoulw/pixloc/pixloc/pixlib/train.py", line 356, in main_worker
training(rank, conf, output_dir, args)
File "/home/zhoulw/pixloc/pixloc/pixlib/train.py", line 266, in training
loss.backward()
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/autograd/init.py", line 147, in backward
Variable._execution_engine.run_backward(
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args)  # type: ignore[attr-defined]
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 138, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/autograd/init.py", line 147, in backward
Variable._execution_engine.run_backward(
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward
function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint
functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 36 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging
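For reference, here is a minimal sketch of the `_set_static_graph()` workaround that the error message itself suggests. The wrapping below is illustrative rather than pixloc's actual setup in train.py, and the call is a private DDP API that is only safe if the autograd graph really is identical in every iteration:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model(model: torch.nn.Module, rank: int) -> DDP:
    # Assumes torch.distributed.init_process_group(...) was already called
    # in this worker process before wrapping the model.
    ddp_model = DDP(model.to(rank), device_ids=[rank])
    # Private API referenced by the DDP error message above; only valid when
    # the graph does not change across iterations (no parameters that are
    # conditionally skipped, no checkpointed blocks sharing parameters).
    ddp_model._set_static_graph()
    return ddp_model
```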
log-distributed.txt
Apologies for the late reply. From the logs:
UserWarning: Error detected in torch::autograd::AccumulateGrad. No forward pass information available. Enable detect anomaly during forward pass for more information. (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:85.)
Can you try to enable anomaly detection? Does this always happen at the first iteration?
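A minimal sketch of how anomaly detection could be enabled, with a placeholder model standing in for pixloc's; in train.py it would wrap the existing loss.backward():

```python
import torch

# Global switch: every backward pass now records forward-pass traces.
torch.autograd.set_detect_anomaly(True)

# Placeholder model and data, not pixloc's pixlib model and batch.
model = torch.nn.Linear(4, 1)
x = torch.randn(8, 4)

# Or scope the check to a single iteration with the context manager.
with torch.autograd.detect_anomaly():
    loss = model(x).pow(2).mean()
    loss.backward()  # a failing op now reports where it was created in forward
```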
Hi @Skydes and @angiend,
I came across the same problem when adapting multi-GPU training. I have set:
torch.autograd.set_detect_anomaly(True)
but it did not solve the problem.
Have you solved this issue? Looking forward to your reply.
zhoulw@zhoulw-Super-Server:~/pixloc$ python3 pixloc/pixlib/train.py pixloc_cmu_reproduce --conf pixloc/pixlib/configs/train_pixloc_cmu.yaml --restore true --distributed true
[11/17/2021 14:27:40 pixloc INFO] Starting experiment pixloc_cmu_reproduce
[11/17/2021 14:27:41 pixloc INFO] Restoring from previous training of pixloc_cmu_reproduce
[11/17/2021 14:27:41 pixloc INFO] Restoring from checkpoint checkpoint_64.tar
[11/17/2021 14:27:41 pixloc INFO] Restoring from previous training of pixloc_cmu_reproduce
[11/17/2021 14:27:41 pixloc INFO] Restoring from checkpoint checkpoint_64.tar
[11/17/2021 14:27:41 pixloc INFO] Training in distributed mode with 2 GPUs
Traceback (most recent call last):
File "pixloc/pixlib/train.py", line 384, in <module>
torch.multiprocessing.spawn(
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/zhoulw/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/zhoulw/pixloc/pixloc/pixlib/train.py", line 357, in main_worker
training(rank, conf, output_dir, args)
File "/home/zhoulw/pixloc/pixloc/pixlib/train.py", line 152, in training
assert not Path(lock).exists(), lock
AssertionError: /home/zhoulw/pixloc/distributed_lock_0
When retraining the model on two GPUs, I get the error above. Could you give some advice? Thank you.
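A minimal cleanup sketch, assuming the distributed_lock_* files are stale leftovers from a run that did not exit cleanly; the path below is taken from the traceback, and this is a guess rather than a confirmed fix from the maintainers:

```python
from pathlib import Path

# Path taken from the AssertionError above; adjust to your checkout.
repo_root = Path("/home/zhoulw/pixloc")

# Remove any leftover lock files so the assert in training() passes again.
for lock in repo_root.glob("distributed_lock_*"):
    print(f"removing stale lock file: {lock}")
    lock.unlink()
```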