Hello author, I encountered the following error while training UFF. May I know how to resolve it? Thank you!
root@autodl-container-b9e411b050-3a77f4bb:~/autodl-tmp/M3DM# OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node=1 fusion_pretrain.py --accum_iter 16 --lr 0.003 --batch_size 16 --data_path datasets/patch_lib
/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
| distributed init (rank 0): env://, gpu 0
[10:57:46.999771] job dir: /root/autodl-tmp/M3DM
[10:57:46.999924] Namespace(accum_iter=16,
batch_size=16,
blr=0.002,
data_path='datasets/patch_lib',
device='cuda',
dist_backend='nccl',
dist_on_itp=False,
dist_url='env://',
distributed=True,
epochs=3,
gpu=0,
input_size=224,
local_rank=0,
log_dir='./output_dir',
lr=0.003,
min_lr=0.0,
num_workers=0,
output_dir='./output_dir',
pin_mem=True,
rank=0,
resume='',
seed=0,
start_epoch=0,
warmup_epochs=1,
weight_decay=1.5e-06,
world_size=1)
[10:57:47.000363] <dataset.PreTrainTensorDataset object at 0x7f2d6f6a46d0>
[10:57:47.000448] Sampler_train = <torch.utils.data.distributed.DistributedSampler object at 0x7f2c9833a700>
[10:57:47.002473] base lr: 3.00e-03
[10:57:47.002497] actual lr: 3.00e-03
[10:57:47.002515] accumulate grad iterations: 16
[10:57:47.002534] effective batch size: 256
[10:57:47.132814] AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.95)
eps: 1e-08
lr: 0.003
weight_decay: 0.01
)
[10:57:47.132945] Start training for 3 epochs
[10:57:47.133200] log_dir: ./output_dir
[W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
Traceback (most recent call last):
File "fusion_pretrain.py", line 201, in
main(args)
File "fusion_pretrain.py", line 171, in main
train_stats = train_one_epoch(
File "/root/autodl-tmp/M3DM/engine_fusion_pretrain.py", line 48, in train_one_epoch
loss /= accum_iter
RuntimeError: Output 0 of _DDPSinkBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 6865) of binary: /root/miniconda3/bin/python3
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
fusion_pretrain.py FAILED
Failures:
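
For reference, the RuntimeError seems to come from the in-place division at engine_fusion_pretrain.py line 48. Under DistributedDataParallel, the loss comes back through _DDPSinkBackward as a view, and autograd forbids modifying such a view in place. Following the error message's own suggestion ("You can fix this by cloning the output of the custom Function"), I would guess the fix is to make the division out-of-place, something like this sketch (assuming loss is the tensor returned through the DDP-wrapped forward):

    # engine_fusion_pretrain.py, train_one_epoch -- sketch of the change
    # Before: in-place division modifies a view created by _DDPSinkBackward
    # loss /= accum_iter
    # After: out-of-place division allocates a fresh tensor, so no view is
    # modified in place (loss = loss.clone() / accum_iter should also work)
    loss = loss / accum_iter

The find_unused_parameters warning earlier in the log looks unrelated to the crash; it only warns about the overhead of an extra autograd-graph traversal per iteration.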
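Separately, the FutureWarning at the top of the log notes that torch.distributed.launch is deprecated in favor of torchrun. Assuming fusion_pretrain.py already reads the rank from os.environ['LOCAL_RANK'] (torchrun behaves as if --use_env were set), the equivalent launch command should be:

    OMP_NUM_THREADS=1 torchrun --nproc_per_node=1 fusion_pretrain.py --accum_iter 16 --lr 0.003 --batch_size 16 --data_path datasets/patch_lib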