fastnlp / fastNLP

fastNLP: A Modularized and Extensible NLP Framework. Currently still in incubation.
https://gitee.com/fastnlp/fastNLP
Apache License 2.0

Multi-GPU parallel training #458

Closed · Shaun-Wong closed this issue 1 month ago

Shaun-Wong commented 3 months ago

Is there a link to the official manual? The current one is broken. How do I set up parallel (multi-GPU) training for a model?

Shaun-Wong commented 3 months ago

Exception info: {
  "exc_type": "RuntimeError",
  "exc_value": "Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 1: 390 391 412 413 414 415 416. In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error",
  "exc_time": "2024-04-06-16:45:07",
  "exc_global_rank": 1,
  "exc_local_rank": 1
}
Start to stop these pids: [127129, 127430, 127650], please wait several seconds.

Traceback (most recent call last):
  File "/data/tmp/EfficientSAM/mine/train.py", line 218, in
    trainer.run(num_train_batch_per_epoch=-1, num_eval_batch_per_dl=-1, num_eval_sanity_batch=1)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/trainer.py", line 711, in run
    raise e
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/trainer.py", line 687, in run
    self.train_batch_loop.run(self, self.dataloader)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/loops/train_batch_loop.py", line 64, in run
    raise e
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/loops/train_batch_loop.py", line 55, in run
    self.batch_step_fn(trainer, batch)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/loops/train_batch_loop.py", line 76, in batch_step_fn
    outputs = trainer.train_step(batch)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/controllers/trainer.py", line 1303, in train_step
    outputs = self.driver.model_call(batch, self._train_step, self._train_step_signature_fn)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/fastNLP/core/drivers/torch_driver/ddp.py", line 512, in model_call
    return self.model(batch, fastnlp_fn=fn, fastnlp_signature_fn=signature_fn,
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/miniconda3/envs/efficientsam/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 994, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 1: 390 391 412 413 414 415 416
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
Killed

How do I fix this?
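The RuntimeError itself points to the two standard remedies: let DistributedDataParallel tolerate parameters that are unused when computing the loss (find_unused_parameters=True), and set TORCH_DISTRIBUTED_DEBUG to see which parameters never receive gradients. A minimal raw-PyTorch sketch of both follows; the toy model and the torchrun launch line are illustrative only, and with fastNLP the flag has to reach DDP through the Trainer instead (see the reply below).

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Report the *names* of parameters that received no gradient on each rank,
# instead of only their indices (390, 391, 412, ...).
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"  # or "INFO"


class ToyModel(nn.Module):
    """Toy stand-in for the real model: the `aux` head never contributes to the loss."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)
        self.aux = nn.Linear(16, 16)  # parameters that never receive gradients

    def forward(self, x):
        return self.backbone(x)


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(
        ToyModel().cuda(),
        device_ids=[local_rank],
        # Allow parameters that take no part in producing the loss.
        find_unused_parameters=True,
    )

    for _ in range(2):
        loss = model(torch.randn(4, 16, device="cuda")).sum()
        loss.backward()
        model.zero_grad(set_to_none=True)
    # Without find_unused_parameters=True, the second iteration raises exactly the
    # RuntimeError above, because `aux` never received gradients in the first one.


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=2 this_script.py
```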

yhcc commented 2 months ago

You should pass an extra argument when the Trainer is initialized:

[image]
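
Presumably the screenshot shows that extra keyword being passed through the Trainer's torch_kwargs. A sketch under that assumption: it relies on the installed fastNLP version's TorchDDPDriver forwarding torch_kwargs["ddp_kwargs"] to torch.nn.parallel.DistributedDataParallel (check the Trainer docstring of your version for the exact name), and model, train_dataloader and optimizer stand for the objects already built in train.py.

```python
from fastNLP import Trainer

trainer = Trainer(
    model=model,                        # existing model from train.py
    driver="torch",
    device=[0, 1],                      # more than one device selects the DDP driver
    train_dataloader=train_dataloader,  # existing dataloader from train.py
    optimizers=optimizer,               # existing optimizer from train.py
    # Assumed route for the extra argument: forwarded to DistributedDataParallel so
    # that parameters which do not contribute to the loss no longer abort training.
    torch_kwargs={"ddp_kwargs": {"find_unused_parameters": True}},
)
trainer.run(num_train_batch_per_epoch=-1, num_eval_batch_per_dl=-1, num_eval_sanity_batch=1)
```

Note that find_unused_parameters=True adds an extra traversal of the autograd graph on every iteration, so it is also worth checking whether the parameters reported for rank 1 (indices 390, 391, 412, ...) belong to a branch that should in fact be contributing to the loss.
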
Shaun-Wong commented 2 months ago

> You should pass some extra arguments when the Trainer is initialized. [image]

Hello, the link to the Chinese documentation is broken. When will it be restored?

houdawang commented 2 months ago

> You should pass some extra arguments when the Trainer is initialized. [image]
>
> Hello, the link to the Chinese documentation is broken. When will it be restored?

Have you managed to solve this?

Shaun-Wong commented 1 month ago

> You should pass some extra arguments when the Trainer is initialized. [image]
>
> Hello, when will the link to the Chinese documentation be restored?
>
> Have you managed to solve this?

Hi, no, not yet.