ParthaEth / GIF

GIF is a photorealistic generative face model with explicit 3D geometric and photometric control.
https://gif.is.tue.mpg.de/
MIT License
405 stars · 63 forks

CUDA out of memory #25

Closed: wiamBoumaazi closed this issue 1 year ago

wiamBoumaazi commented 2 years ago

I get this error when I try to run `python generate_voca_animation.py`.

Any hints?

Output:

```
Collating FFHQ parameters
Collating FFHQ parameters, done!
<<<<<<<<<< Running Style GAN 2 >>>>>>>>>>>>>>>>>>>>>>>
generator const_input n_params: 8192
generator to_rgb n_params: 1568667
generator progression n_params: 27955884
generator z_to_w n_params: 2101248
creating the FLAME Decoder
/home/wiam/Desktop/GIF/my_utils/photometric_optimization/models/FLAME.py:92: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('dynamic_lmk_faces_idx', torch.tensor(lmk_embeddings['dynamic_lmk_faces_idx'], dtype=torch.long))
/home/wiam/Desktop/GIF/my_utils/photometric_optimization/models/FLAME.py:93: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('dynamic_lmk_bary_coords', torch.tensor(lmk_embeddings['dynamic_lmk_bary_coords'], dtype=self.dtype))
/home/wiam/anaconda3/envs/gif3/lib/python3.8/site-packages/pytorch3d/io/obj_io.py:533: UserWarning: Mtl file does not exist: /home/wiam/Desktop/GIF/GIF_resources/input_files//flame_resource/template.mtl
  warnings.warn(f"Mtl file does not exist: {f}")
creating the FLAME Decoder
/home/wiam/Desktop/GIF/my_utils/photometric_optimization/models/FLAME.py:92: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('dynamic_lmk_faces_idx', torch.tensor(lmk_embeddings['dynamic_lmk_faces_idx'], dtype=torch.long))
/home/wiam/Desktop/GIF/my_utils/photometric_optimization/models/FLAME.py:93: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('dynamic_lmk_bary_coords', torch.tensor(lmk_embeddings['dynamic_lmk_bary_coords'], dtype=self.dtype))
  0%|          | 0/30 [00:00<?, ?it/s]/home/wiam/anaconda3/envs/gif3/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn("Default upsampling behavior when mode={} is changed "
  0%|          | 0/30 [00:26<?, ?it/s]
Traceback (most recent call last):
  File "generate_voca_animation.py", line 90, in <module>
    overlay_visualizer.get_rendered_mesh(flame_params=(shape_batch, exp_batch, pose_batch,
  File "/home/wiam/Desktop/GIF/my_utils/visualize_flame_overlay.py", line 24, in get_rendered_mesh
    self.rendering_helper.render_tex_and_normal(shapecode=shape, expcode=expression,
  File "/home/wiam/Desktop/GIF/my_utils/photometric_optimization/gif_helper.py", line 33, in render_tex_and_normal
    albedos = self.flametex(texcode)
  File "/home/wiam/anaconda3/envs/gif3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/wiam/Desktop/GIF/my_utils/photometric_optimization/models/FLAME.py", line 256, in forward
    texture = self.texture_mean + (self.texture_basis*texcode[:,None,:]).sum(-1)
RuntimeError: CUDA out of memory. Tried to allocate 4.69 GiB (GPU 0; 11.17 GiB total capacity; 1.63 GiB already allocated; 4.20 GiB free; 6.41 GiB reserved in total by PyTorch)
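
For context on why this one line fails: the broadcast `texture_basis * texcode[:, None, :]` materializes a `(B, P, n_tex)` float32 tensor, where `P` is the number of texture pixels. With, say, `P = 512 × 512 × 3`, `n_tex = 200`, and `B = 8` (all assumptions inferred from the traceback, not read from the repo config), that is 8 × 786432 × 200 × 4 bytes ≈ 4.69 GiB, which matches the error. A minimal sketch of a mathematically equivalent formulation that never allocates the intermediate, assuming those shapes:

```python
# Sketch only, not the repo's code: an equivalent way to write the failing
# line in photometric_optimization/models/FLAME.py. Sizes are assumptions.
import torch

B, P, n_tex = 8, 512 * 512 * 3, 200    # assumed batch, texture pixels, basis size
texture_mean = torch.zeros(P)          # stand-in for self.texture_mean
texture_basis = torch.randn(P, n_tex)  # stand-in for self.texture_basis
texcode = torch.randn(B, n_tex)

# Original: the broadcast materializes a (B, P, n_tex) float32 tensor
# (~4.69 GiB for these sizes) before .sum(-1) collapses it.
# texture = texture_mean + (texture_basis * texcode[:, None, :]).sum(-1)

# Same math as a matrix product; the only new allocation is the (B, P) result.
texture = texture_mean + texcode @ texture_basis.T
assert texture.shape == (B, P)
```

For these sizes the matmul output is only B × P floats (about 24 MB), so the per-frame memory cost becomes negligible compared to the broadcast version.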

lyhdtc commented 2 years ago

> I get this error when I try to run `python generate_voca_animation.py`.
>
> Any hints?

One trick is to set a smaller `batch_size`; that worked for me. Good luck!
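
A minimal sketch of that batch-splitting idea at the driver-script level; `render_fn` and `max_batch` are hypothetical stand-ins, since the actual call in generate_voca_animation.py takes more arguments than shown here:

```python
import torch

def render_in_minibatches(render_fn, shape, exp, pose, max_batch=4):
    # Slice the per-frame FLAME parameter batches along dim 0 so each forward
    # pass fits in GPU memory, then stitch the results back together.
    outs = []
    for s, e, p in zip(shape.split(max_batch),
                       exp.split(max_batch),
                       pose.split(max_batch)):
        outs.append(render_fn(s, e, p))
    return torch.cat(outs)
```

Lowering `max_batch` until the error disappears trades runtime for memory and needs no changes inside FLAME.py.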

ParthaEth commented 1 year ago

Sorry that I am only commenting this late. But I think reducing the batch size is the only way to avoid the out-of-memory issue that doesn't require a complete rewrite of the code.
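
For anyone choosing that smaller batch size, a rough estimate from the numbers in the error message may save some trial and error. This is a back-of-the-envelope sketch; the texture resolution and basis size are assumptions inferred from the traceback, not values read from the repo:

```python
# Estimate the largest batch that fits, from the OOM message above.
# P (texture pixels) and n_tex (basis size) are assumed, not repo values.
P, n_tex = 512 * 512 * 3, 200
bytes_per_sample = P * n_tex * 4             # float32 scratch for the broadcast
free_bytes = 4.20 * 1024**3                  # "4.20 GiB free" reported by PyTorch
print(bytes_per_sample / 1024**3)            # ~0.59 GiB per batch element
print(int(free_bytes // bytes_per_sample))   # ~7 -> try batch_size <= 7, less for headroom
```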