vt-vl-lab / 3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
https://shihmengli.github.io/3D-Photo-Inpainting/

Memory leak #85

Open Shelkey opened 4 years ago

Shelkey commented 4 years ago

When running the application from either an Ubuntu VM (6 GB of RAM) or Google Colab (13.5 GB of RAM), the RAM usage continually increases until the application crashes. Is there a limiter, or does it require every bit of RAM?

Shelkey commented 4 years ago

While the app usually doesn't crash in Colab, I should note that it is more prone to crashing when working with larger files or when the fps, num_frames, or longer_side_len arguments are increased.

peymanrah commented 4 years ago

I think the problem is with the way the depth matrices are being calculated in:

    print(f"Writing depth ply (and basically doing everything) at {time.time()}")
    rt_info = write_ply(image, depth, sample['int_mtx'], mesh_fi, config, rgb_model, depth_edge_model, depth_edge_model, depth_feat_model)

This function has a memory leak for large images (resolution over 1K). Basically, you need to restrict the resolution by setting longer_side_len to a maximum of 1000, or it crashes. Is there any way we can fix this memory leak?
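As a stopgap, the resolution can also be capped before the pipeline ever sees the image. The snippet below is only a sketch of that idea, not code from this repo; the "image" input folder name and the 1000 px cap are assumptions mirroring the longer_side_len limit suggested above.

```python
# Sketch of a pre-processing workaround (not part of this repo): shrink every
# input image so its longer side is at most 1000 px before running main.py.
import os
from PIL import Image

MAX_LONGER_SIDE = 1000   # mirrors a conservative longer_side_len setting
IMAGE_DIR = "image"      # assumed input folder; adjust to your setup

for name in os.listdir(IMAGE_DIR):
    path = os.path.join(IMAGE_DIR, name)
    try:
        img = Image.open(path)
    except OSError:
        continue                     # skip files PIL cannot read
    w, h = img.size
    if max(w, h) > MAX_LONGER_SIDE:
        scale = MAX_LONGER_SIDE / max(w, h)
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
        img.save(path)               # overwrite in place with the smaller copy
```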

0SmooK0 commented 4 years ago

I am not well versed in virtual machines, but if you can set it to have a large paging file (aka virtual memory), it will reach into that and won't crash, depending on how much you give it.
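On a Linux VM the equivalent is adding swap. Below is a minimal sketch of that idea (an assumption, not something from this thread); it shells out to the standard Linux tools and must run as root. Keep in mind that spilling to disk-backed swap will slow the pipeline down considerably.

```python
# Sketch: create and enable an 8 GB swap file on a Linux VM so the process can
# spill to disk instead of being killed when RAM runs out. Requires root.
import subprocess

def add_swap(size_gb: int = 8, path: str = "/swapfile"):
    subprocess.run(["fallocate", "-l", f"{size_gb}G", path], check=True)  # reserve space
    subprocess.run(["chmod", "600", path], check=True)                    # restrict permissions
    subprocess.run(["mkswap", path], check=True)                          # format as swap
    subprocess.run(["swapon", path], check=True)                          # enable it

if __name__ == "__main__":
    add_swap()
```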

zdhernandez commented 3 years ago

@peymanrah so how could we de-allocate, or what do you think should be set to None to free memory resources (see the sketch after the traceback below)? I'm running on Docker and this is what I am getting:

Writing depth ply (and basically doing everything) at 1604091515.5040085
WARNING:py.warnings:/app/mesh_tools.py:174: RuntimeWarning: divide by zero encountered in true_divide
  input_disp = 1./np.abs(input_depth)

  0%|          | 0/1 [03:15<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 523, in <module>
    generate_2dto3d(config, batch_size=25)
  File "main.py", line 417, in generate_2dto3d
    process_image_2dto3d(config)
  File "main.py", line 302, in process_image_2dto3d
    depth_feat_model)
  File "/app/mesh.py", line 1941, in write_ply
    inpaint_iter=0)
  File "/app/mesh.py", line 1476, in DL_inpaint_edge
    cuda=device)
  File "/app/networks.py", line 311, in forward_3P
    edge_output = self.forward(enlarge_input)
  File "/app/networks.py", line 325, in forward
    x7 = self.decoder_2(torch.cat((x6, x1), dim=1))
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/opt/conda/envs/3DP/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 213665792 bytes. Error code 12 (Cannot allocate memory)
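One thing worth noting about this traceback: the failure is a CPU-side allocation (DefaultCPUAllocator), so clearing the GPU cache alone will not help. Below is a minimal sketch of how references could be dropped between images; the per-image variable names are assumptions taken from the write_ply call quoted earlier in this thread, and this is an illustration rather than a confirmed fix for the leak.

```python
# Illustrative sketch (not repo code): force Python/PyTorch to release memory
# between processed images. Since the crash above is a CPU allocation failure,
# dropping references plus gc.collect() is the main lever; empty_cache() only
# returns cached GPU blocks to the driver.
import gc
import torch

def release_memory():
    gc.collect()                      # reclaim unreferenced Python objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # return cached GPU memory to the driver

# Hypothetical usage inside the per-image loop, after write_ply() returns:
#     rt_info = write_ply(image, depth, ...)
#     del image, depth, rt_info      # variable names assumed from the call quoted above
#     release_memory()
```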

firohuber commented 3 years ago

Did anyone find a way to fix this issue? I am encountering the same memory leak.