NVlabs / mask-auto-labeler


Multi-GPU issue for phase 1 #5

Closed · tianyufang1958 closed this 1 year ago

tianyufang1958 commented 1 year ago

I am having an issue running python train.py for phase 1 with multiple GPUs: training fails with 'ZeroDivisionError: float division by zero'. With a single GPU it runs fine.

I have another question. I would like to train on my own dataset, which is in COCO format and has only one class. I modified datasets/pl_data_module.py, but I am wondering whether I also need to modify datasets/coco.py. It would be really helpful to clarify how to work with custom data. DiscoBox works beautifully with custom datasets in a very easy way. Thanks!
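(For illustration only: the sketch below is not the project's actual datasets/pl_data_module.py API. It is a minimal, hypothetical one-class COCO data module built on torchvision's CocoDetection; the class name CustomCocoDataModule, the constructor arguments, and the paths are all made up.)

```python
# Hypothetical sketch only -- not MAL's datasets/pl_data_module.py.
# A one-class dataset is still plain COCO JSON: one entry in "categories",
# with every annotation's category_id pointing at it.
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision.datasets import CocoDetection
import torchvision.transforms as T


class CustomCocoDataModule(pl.LightningDataModule):
    def __init__(self, img_dir, ann_file, batch_size=4, num_workers=4):
        super().__init__()
        self.img_dir = img_dir
        self.ann_file = ann_file          # COCO-format JSON with a single category
        self.batch_size = batch_size
        self.num_workers = num_workers

    def setup(self, stage=None):
        self.train_set = CocoDetection(
            root=self.img_dir,
            annFile=self.ann_file,
            transform=T.ToTensor(),
        )

    def train_dataloader(self):
        return DataLoader(
            self.train_set,
            batch_size=self.batch_size,
            shuffle=True,
            num_workers=self.num_workers,
            collate_fn=lambda batch: tuple(zip(*batch)),  # keep variable-size targets
        )
```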

voidrank commented 1 year ago

Hi @tianyufang1958

Could you provide more complete logs? The existing log is a single line without any context, which makes it difficult to diagnose.

tianyufang1958 commented 1 year ago

> Hi @tianyufang1958
>
> Could you provide more complete logs? The existing log is a single line without any context, which makes it difficult to diagnose.

Here is the full log of the error during phase 1 training with multiple GPUs.

File "/workspace/mal_vol/mask-auto-labeler/main.py", line 201, in trainer.fit(model, data_loader) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 582, in fit call._call_and_handle_interrupt( File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt return trainer_fn(*args, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 624, in _fit_impl self._run(model, ckpt_path=self.ckpt_path) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1061, in _run results = self._run_stage() File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1140, in _run_stage self._run_train() File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1163, in _run_train self.fit_loop.run() File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run self.advance(*args, *kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance self._outputs = self.epoch_loop.run(self._data_fetcher) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run self.advance(args, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 214, in advance batch_output = self.batch_loop.run(kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run self.advance(*args, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 90, in advance outputs = self.manual_loop.run(kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run self.advance(*args, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/manual_loop.py", line 110, in advance training_step_output = self.trainer._call_strategy_hook("training_step", kwargs.values()) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1443, in _call_strategy_hook output = fn(args, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 352, in training_step return self.model(*args, kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, kwargs) File "/opt/conda/lib/python3.8/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 230, in forward return self.module(*inputs, *kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(input, kwargs) File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 98, in forward output = self._forward_module.training_step(*inputs, *kwargs) File "/workspace/mal_vol/mask-auto-labeler/models/mal.py", line 461, in training_step self.set_lr_per_iteration(optimizer, 1. local_step) File "/workspace/mal_vol/mask-auto-labeler/models/mal.py", line 466, in set_lr_per_iteration epoch = 1. * local_step / self._num_iter_per_epoch + self.current_epoch ZeroDivisionError: float division by zero

voidrank commented 1 year ago

Can you check the size of the dataset? It looks like the dataset is empty.
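One quick way to check is to inspect the COCO annotation file directly with pycocotools; the path below is a placeholder.

```python
# Sanity-check the COCO annotation file; replace the path with your own.
from pycocotools.coco import COCO

coco = COCO("path/to/annotations.json")
print("images:", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
print("categories:", coco.loadCats(coco.getCatIds()))
```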