bowenc0221 / panoptic-deeplab

This is a PyTorch re-implementation of our CVPR 2020 paper "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation" (https://arxiv.org/abs/1911.10194)
Apache License 2.0

AssertionError: Default process group is not initialized #57

Closed · kirqwer6666 closed 3 years ago

kirqwer6666 commented 3 years ago

Hi! When I start training, it reports the error below. I don't know how to deal with it.

```
[11/11 10:47:28 d2.engine.train_loop]: Starting training from iteration 0
ERROR [11/11 10:47:31 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 134, in train
    self.run_step()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 423, in run_step
    self._trainer.run_step()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 228, in run_step
    loss_dict = self.model(data)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/projects/panoptic_deeplab/panoptic_seg.py", line 87, in forward
    features = self.backbone(images.tensor)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/桌面/pdeeplab/tools_d2/d2/backbone.py", line 136, in forward
    y = super().forward(x)
  File "/home/zyq/桌面/pdeeplab/tools_d2/../segmentation/model/backbone/xception.py", line 190, in forward
    x = self.bn1(x)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 519, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 625, in get_world_size
    return _get_group_size(group)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 220, in _get_group_size
    _check_default_pg()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    assert _default_pg is not None, \
AssertionError: Default process group is not initialized
[11/11 10:47:31 d2.engine.hooks]: Total training time: 0:00:03 (0:00:00 on hooks)
[11/11 10:47:31 d2.utils.events]: iter: 0  lr: N/A  max_mem: 991M
Traceback (most recent call last):
  File "train_panoptic_deeplab.py", line 192, in <module>
    launch(
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/launch.py", line 62, in launch
    main_func(*args)
  File "train_panoptic_deeplab.py", line 186, in main
    return trainer.train()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 413, in train
    super().train(self.start_iter, self.max_iter)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 134, in train
    self.run_step()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 423, in run_step
    self._trainer.run_step()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 228, in run_step
    loss_dict = self.model(data)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/detectron2/projects/panoptic_deeplab/panoptic_seg.py", line 87, in forward
    features = self.backbone(images.tensor)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/桌面/pdeeplab/tools_d2/d2/backbone.py", line 136, in forward
    y = super().forward(x)
  File "/home/zyq/桌面/pdeeplab/tools_d2/../segmentation/model/backbone/xception.py", line 190, in forward
    x = self.bn1(x)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 519, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 625, in get_world_size
    return _get_group_size(group)
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 220, in _get_group_size
    _check_default_pg()
  File "/home/zyq/anaconda3/envs/qwer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    assert _default_pg is not None, \
AssertionError: Default process group is not initialized
```

bowenc0221 commented 3 years ago

This is a PyTorch-related issue: SyncBatchNorm cannot run on a single GPU by definition, since it synchronizes batch statistics across a distributed process group.

Check: https://github.com/facebookresearch/detectron2/issues/2174
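For single-GPU training, a common workaround (not from this thread, sketched here as an illustration) is to recursively replace every `SyncBatchNorm` in the model with a plain `BatchNorm2d` before training, so no distributed process group is needed:

```python
import torch
import torch.nn as nn


def revert_sync_batchnorm(module: nn.Module) -> nn.Module:
    """Recursively replace nn.SyncBatchNorm with nn.BatchNorm2d.

    SyncBatchNorm requires an initialized distributed process group;
    swapping it for BatchNorm2d lets the model run on a single GPU or CPU.
    """
    mod = module
    if isinstance(module, nn.SyncBatchNorm):
        # Build an equivalent BatchNorm2d and copy over all state.
        mod = nn.BatchNorm2d(
            module.num_features,
            module.eps,
            module.momentum,
            module.affine,
            module.track_running_stats,
        )
        if module.affine:
            mod.weight.data = module.weight.data.clone()
            mod.bias.data = module.bias.data.clone()
        mod.running_mean = module.running_mean
        mod.running_var = module.running_var
        mod.num_batches_tracked = module.num_batches_tracked
    # Recurse into children and re-attach converted submodules.
    for name, child in module.named_children():
        mod.add_module(name, revert_sync_batchnorm(child))
    return mod


if __name__ == "__main__":
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SyncBatchNorm(8), nn.ReLU())
    model = revert_sync_batchnorm(model)
    # Forward now works without torch.distributed being initialized.
    out = model(torch.randn(2, 3, 8, 8))
    print(out.shape)
```

Alternatively, keep SyncBatchNorm and launch through detectron2's `launch` with `--num-gpus` ≥ 1, which initializes the process group for you, or switch the norm layer in the training config to plain BN (the exact config key depends on the backbone used).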

kirqwer6666 commented 3 years ago

Thank you for replying!