MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/
MIT License

Memory Error #17

Closed: freezecook closed this issue 1 year ago

freezecook commented 1 year ago
Traceback (most recent call last):
  File "test.py", line 36, in main
    trainer.test(model, testset, ckpt_path=checkpoint)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 907, in test
    return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 683, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 950, in _test_impl
    results = self._run(model, ckpt_path=self.tested_ckpt_path)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1195, in _run
    self._dispatch()
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1271, in _dispatch
    self.training_type_plugin.start_evaluating(self)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 206, in start_evaluating
    self._results = trainer.run_stage()
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1282, in run_stage
    return self._run_evaluate()
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1330, in _run_evaluate
    eval_loop_results = self._evaluation_loop.run()
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\loops\dataloader\evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_batches, self.num_dataloaders)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\loops\epoch\evaluation_epoch_loop.py", line 122, in advance
    output = self._evaluation_step(batch, batch_idx, dataloader_idx)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\loops\epoch\evaluation_epoch_loop.py", line 213, in _evaluation_step
    output = self.trainer.accelerator.test_step(step_kwargs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 244, in test_step
    return self.training_type_plugin.test_step(*step_kwargs.values())
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 222, in test_step
    return self.model.test_step(*args, **kwargs)
  File "C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code\v2a_model.py", line 268, in test_step
    model_outputs = self.model(batch_inputs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code\lib\model\v2a.py", line 159, in forward
    cond, smpl_tfs, feature_vectors=feature_vectors, is_training=self.training)
  File "C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code\lib\model\v2a.py", line 237, in get_rbg_value
    _, gradients, feature_vectors = self.forward_gradient(x, pnts_c, cond, tfs, create_graph=is_training, retain_graph=is_training)
  File "C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code\lib\model\v2a.py", line 269, in forward_gradient
    output = self.implicit_network(pnts_c, cond)[0]
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code\lib\model\networks.py", line 104, in forward
    x = lin(x)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\torch\nn\modules\module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\torch\nn\modules\linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\freez\anaconda3\envs\vid2avatar\lib\site-packages\torch\nn\functional.py", line 1848, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 8.00 GiB total capacity; 6.83 GiB already allocated; 0 bytes free; 7.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Testing:   0%|          | 0/42 [00:23<?, ?it/s]

(vid2avatar) C:\Users\freez\Documents\python_Projects\Video2Avatar\vid2avatar\code>nvidia-smi
Sat May 20 14:47:12 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.14                 Driver Version: 531.14       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3050       WDDM | 00000000:09:00.0  On |                  N/A |
|  0%   38C    P8                9W / 130W|   1130MiB /  8192MiB |      3%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

This is a similar issue to https://github.com/MoyGcc/vid2avatar/issues/5. However, I'm using an RTX 3050 with 8 GB VRAM. Reducing the sampled pixels doesn't seem to work in my case.
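As a side note, the error message itself suggests the PYTORCH_CUDA_ALLOC_CONF allocator hint. That can be tried by setting the environment variable before launching test.py (Windows cmd syntax shown; the 128 MiB split size is only an illustrative value, not something taken from this repo, and it mitigates fragmentation rather than reducing the model's actual memory footprint):

    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
    python test.py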

MoyGcc commented 1 year ago

Hi, in https://github.com/MoyGcc/vid2avatar/issues/5 the memory issue occurs during training. You could try reducing the pixel_per_batch attribute in https://github.com/MoyGcc/vid2avatar/blob/main/code/confs/dataset/video.yaml#L37, which will reduce memory consumption during testing. However, all my experiments were run on an RTX 3090 (24 GB), so running this repo on a GPU with more VRAM is highly recommended.

freezecook commented 1 year ago

Thanks for responding. It seems that setting pixel_per_batch to 512 does the trick for me.
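For anyone hitting the same error, the change amounts to editing the line of code/confs/dataset/video.yaml linked above so that it reads roughly as follows (the surrounding keys are only a sketch of the file's structure, not copied from the repo):

    # code/confs/dataset/video.yaml  (test-time settings; exact layout may differ)
    test:
        ...
        pixel_per_batch: 512    # lower this further if CUDA still runs out of memory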

MoyGcc commented 1 year ago

Cool. Closing this issue.