athn-nik / teach

Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans"
https://teach.is.tue.mpg.de

Training stops with a CUDA OOM error after a few epochs #7

Closed chinnusai25 closed 1 year ago

chinnusai25 commented 1 year ago

I tried training the setup to replicate the reported results, but training stops after a few epochs (at epoch 101, to be exact; the epoch-99 checkpoint had been saved). The exact error I get is as follows:

```
Traceback (most recent call last):
  File "/home/teach/train.py", line 48, in _train
    return train(cfg, ckpt_ft)
  File "/home/teach/train.py", line 131, in train
    trainer.fit(model, datamodule=data_module, ckpt_path=ckpt_ft)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
    return self._run_train()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train
    self.fit_loop.run()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 193, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 215, in advance
    result = self._run_optimization(
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 266, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 378, in _optimizer_step
    lightning_module.optimizer_step(
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1652, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 164, in step
    trainer.accelerator.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 339, in optimizer_step
    self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 163, in optimizer_step
    optimizer.step(closure=closure, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/optim/adamw.py", line 100, in step
    loss = closure()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 148, in _wrap_closure
    closure_result = closure()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 160, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 142, in closure
    step_output = self._step_fn()
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 435, in _training_step
    training_step_output = self.trainer.accelerator.training_step(step_kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 219, in training_step
    return self.training_type_plugin.training_step(*step_kwargs.values())
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 213, in training_step
    return self.model.training_step(*args, **kwargs)
  File "/home/teach/teach/model/base.py", line 53, in training_step
    return self.allsplit_step("train", batch, batch_idx)
  File "/home/teach/teach/model/teach.py", line 459, in allsplit_step
    output_features_1_M_with_transition = self.motiondecoder(latent_vector_1_M, lengths=length_1_with_transition)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/teach/teach/model/motiondecoder/actor.py", line 68, in forward
    output = self.seqTransDecoder(tgt=time_queries, memory=z,
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 252, in forward
    output = mod(output, memory, tgt_mask=tgt_mask,
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 459, in forward
    x = self.norm3(x + self._ff_block(x))
  File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 483, in _ff_block
    x = self.linear2(self.dropout(self.activation(self.linear1(x))))
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 10.92 GiB total capacity; 9.63 GiB already allocated; 9.44 MiB free; 10.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
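
The error message itself hints at allocator fragmentation (reserved memory >> allocated memory) and suggests `max_split_size_mb`. A minimal sketch of acting on that hint via the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which PyTorch reads before the first CUDA allocation; the 128 MiB value below is an illustrative guess, not a value recommended anywhere in this thread:

```python
import os

# The caching allocator reads PYTORCH_CUDA_ALLOC_CONF lazily, at the first
# CUDA allocation, so set it before any tensor lands on the GPU (or export
# it in the shell before launching train.py).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after setting the env var on purpose

# Any subsequent CUDA allocation now uses the configured split size.
x = torch.randn(1024, 1024, device="cuda")
```

Note this only mitigates fragmentation; if the model genuinely needs more memory than the GPU has, only a smaller batch size or a bigger GPU helps.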

athn-nik commented 1 year ago

Hello @chinnusai25, this can happen. To be safe, you should run the model on a GPU with at least 16GB or 32GB of memory. Alternatively, you can reduce the batch size in the machine.yaml config. Let me know if this still happens with a batch size smaller than 8 on your GPU.
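
For reference, the suggested change would look roughly like this; the key name `batch_size` is an assumption for illustration, so check the actual machine.yaml for the field the repo uses:

```yaml
# machine.yaml (sketch; the key name is assumed, not taken from the repo)
batch_size: 8  # lower this until training fits in GPU memory, e.g. 8, then 4
```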

chinnusai25 commented 1 year ago

Thanks for the reply, I will try it out. However, training time increases accordingly with a smaller batch size, so is there a way to train this setup on multiple GPUs instead of a single one?
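
(For context: PyTorch Lightning, the trainer used here per the traceback, does support multi-GPU training. Below is a minimal, self-contained sketch against the PL 1.5-era API visible in the traceback; `ToyModel` and the random dataset are hypothetical stand-ins, not the repo's classes, and whether TEACH's own training code runs unchanged under DDP is not confirmed in this thread.)

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class ToyModel(pl.LightningModule):
    """Stand-in module; the repo's real model is built in train.py."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(256, 32), torch.randn(256, 1)),
                  batch_size=8)  # per-GPU batch size under DDP

trainer = pl.Trainer(
    gpus=2,          # number of GPUs on the node
    strategy="ddp",  # one process per GPU (PL >= 1.5 API)
    max_epochs=2,
)
trainer.fit(ToyModel(), data)
```

Under DDP each GPU sees its own batch, so the effective batch size is the per-GPU size times the number of GPUs; `accumulate_grad_batches` on the `Trainer` is another way to recover a larger effective batch on a single small GPU.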

athn-nik commented 1 year ago

I am closing this for now, since there is no further activity.