vt-vl-lab / FGVC

[ECCV 2020] Flow-edge Guided Video Completion

Can you explain what this error means? #38

Open amilkis opened 3 years ago

amilkis commented 3 years ago

Hi there,

I was able to run the test (tennis) clips, but when I try it out on my own material I get the following error. I've tried it with 4K and HD material and I keep getting the same result:

```
miniconda3/lib/python3.8/site-packages/torch/nn/functional.py:3384: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  warnings.warn("Default grid_sample and affine_grid behavior has changed "
Traceback (most recent call last):
  File "video_completion.py", line 613, in <module>
    main(args)
  File "video_completion.py", line 578, in main
    video_completion(args)
  File "video_completion.py", line 368, in video_completion
    mask_tofill, video_comp = spatial_inpaint(deepfill, mask_tofill, video_comp)
  File "/lgNet/users/andy/Downloads/FGVC-master/tool/spatial_inpaint.py", line 12, in spatial_inpaint
    img_res = deepfill.forward(video_comp[:, :, :, keyFrameInd] * 255., mask[:, :, keyFrameInd]) / 255.
  File "/lgNet/users/andy/Downloads/FGVC-master/tool/frame_inpaint.py", line 35, in forward
    _, inpaint_res, _ = self.deepfill(image.to(self.device), mask.to(self.device), small_mask.to(self.device))
  File "/lgNet/users/andy/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/lgNet/users/andy/Downloads/FGVC-master/models/DeepFill_Models/DeepFill.py", line 27, in forward
    stage2_output, offset_flow = self.stage_2(stage2_input, small_mask)
  File "/lgNet/users/andy/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/lgNet/users/andy/Downloads/FGVC-master/models/DeepFill_Models/DeepFill.py", line 78, in forward
    attn_x, offset_flow = self.CAttn(attn_x, attn_x, mask=resized_mask)
  File "/lgNet/users/andy/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/lgNet/users/andy/Downloads/FGVC-master/models/DeepFill_Models/ops.py", line 343, in forward
    yi = F.softmax(yi*scale, dim=1)
  File "/lgNet/users/andy/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 1498, in softmax
    ret = input.softmax(dim)
RuntimeError: CUDA out of memory. Tried to allocate 3.91 GiB (GPU 0; 23.88 GiB total capacity; 16.52 GiB already allocated; 2.16 GiB free; 20.72 GiB reserved in total by PyTorch)
```

AnimeshMaheshwari22 commented 3 years ago

I faced the same issue. A probable solution is to use smaller image files. But while downscaling the images with Pillow, I ran into this error: #41

gaochen315 commented 3 years ago

It seems like your images are too large. You can try reducing the resolution. Please make sure the height and width are both divisible by 8.
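As a rough illustration of this suggestion (a sketch, not part of the repo — the `snap_to_multiple_of_8` helper and the 1280-pixel cap are assumptions), one can compute a target size whose sides are both multiples of 8 and then resize each frame to it:

```python
def snap_to_multiple_of_8(width, height, max_side=1280):
    """Scale (width, height) so the longer side is at most max_side,
    then round each dimension down to the nearest multiple of 8."""
    scale = min(1.0, max_side / max(width, height))
    w = int(round(width * scale)) // 8 * 8
    h = int(round(height * scale)) // 8 * 8
    return w, h

# 4K frames come out at a GPU-friendly size:
# snap_to_multiple_of_8(3840, 2160) -> (1280, 720)
```

With Pillow this would be something like `img = img.resize(snap_to_multiple_of_8(*img.size))` before writing the frame back out.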

lec0dex commented 3 years ago

I've tweaked the code to keep frames on disk and only pass them to the GPU when required (an approach brought over from RAFT). The process can now handle larger numbers of frames and higher resolutions. To optimize it further, I also implemented Ray for multiprocessing and Zarr for storage.
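The pattern described above — keep frames on disk and only materialize one at a time — can be sketched roughly like this (a minimal illustration with a stand-in `process_fn`; in the real pipeline `process_fn` would move the frame to the GPU and back, and the fork's Ray/Zarr machinery is not shown):

```python
import os
import pickle

def process_frames_lazily(frame_paths, process_fn, out_dir):
    """Process frames one at a time so only a single frame is ever
    resident in memory; results are written straight back to disk."""
    out_paths = []
    for i, path in enumerate(frame_paths):
        with open(path, "rb") as f:
            frame = pickle.load(f)        # load one frame from disk
        result = process_fn(frame)        # e.g. run inpainting on the GPU
        out_path = os.path.join(out_dir, "frame_%05d.pkl" % i)
        with open(out_path, "wb") as f:
            pickle.dump(result, f)        # persist result immediately
        del frame, result                 # free memory before the next frame
        out_paths.append(out_path)
    return out_paths
```

The trade-off is extra disk I/O per frame in exchange for a GPU memory footprint that no longer grows with clip length or resolution.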

Thanks to the authors for this great work. I liked it so much that this is my first time messing with Python.

Check my fork at: https://github.com/lec0dex/FGVC