PeterL1n / BackgroundMattingV2

Real-Time High-Resolution Background Matting
MIT License
6.81k stars · 950 forks

ZeroDivisionError: integer division or modulo by zero #14

Closed bigboss97 closed 3 years ago

bigboss97 commented 3 years ago

I have successfully converted a 440x440 video using Colab. Now I'm trying with an HD video and received the following error:

```
!python inference_video.py \
    --model-type mattingrefine \
    --model-backbone resnet50 \
    --model-backbone-scale 0.25 \
    --model-refine-mode sampling \
    --model-refine-sample-pixels 80000 \
    --model-checkpoint "/content/model.pth" \
    --video-src "/content/balconay_test.mp4" \
    --video-bgr "/content/balcony_bg.jpg" \
    --output-dir "/content/output/" \
    --output-type com fgr pha err ref
```

```
  0% 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "inference_video.py", line 178, in <module>
    for src, bgr in tqdm(DataLoader(dataset, batch_size=1, pin_memory=True)):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1104, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/BackgroundMattingV2/dataset/zip.py", line 17, in __getitem__
    x = tuple(d[idx % len(d)] for d in self.datasets)
  File "/content/BackgroundMattingV2/dataset/zip.py", line 17, in <genexpr>
    x = tuple(d[idx % len(d)] for d in self.datasets)
ZeroDivisionError: integer division or modulo by zero
  0% 0/1 [00:00<?, ?it/s]
```
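For context, the failing line is `idx % len(d)` in `dataset/zip.py`: if one of the wrapped datasets has length 0 (which happens when OpenCV cannot decode the video and reports a frame count of 0), the modulo divides by zero. A minimal sketch of the mechanism, using a plain list as a stand-in for an unreadable video dataset:

```python
# Simulate dataset/zip.py line 17 when one wrapped dataset has length 0,
# e.g. because cv2.VideoCapture reported a frame count of 0.
datasets = ([],)  # stand-in for a VideoDataset OpenCV failed to read
idx = 0
try:
    x = tuple(d[idx % len(d)] for d in datasets)
except ZeroDivisionError as e:
    print(e)  # integer division or modulo by zero
```

So the error message points at the modulo, but the root cause is a dataset of length 0.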

PeterL1n commented 3 years ago

I suspect the encoding of your video is not supported. What is the video's codec and resolution?

bigboss97 commented 3 years ago

This is directly from my camera (43MB): Screenshot - 20-12-24 18 36 21

Then I re-rendered the clip with kdenlive (10MB): Screenshot - 20-12-24 18 37 20 It's still crashing with the same error.

PeterL1n commented 3 years ago

If it is convenient, send me your video and background image to peterlin9863@gmail.com

Qiang-Lin commented 3 years ago

We got exactly the same error, and the video-src and video-bgr I used were downloaded directly from the provided Google Drive.

Qiang-Lin commented 3 years ago

> we got exactly the same error. And the video-src and video-bgr I used were directly downloaded from the provided google drive.

I found the reason: cv2 can't get the video-src's information (such as width, height, and frame count).

PeterL1n commented 3 years ago

@Qiang-Lin Can you provide the name of the video footage? Are you sure you are using the latest opencv as specified in our requirements.txt?

Qiang-Lin commented 3 years ago

Yes, the version is the same. I tried dlh.mp4 and ao.mp4 from the fixed-camera folder and hit the above problem; other videos such as vs2.mp4 produce correct results.

PeterL1n commented 3 years ago

I just tried dlh.mp4 and ao.mp4 in the provided Google Colab notebook; they work fine. In fact, the default example video is dlh.mp4, so you don't even need to upload it yourself.

Running ao.mp4 produces an index-out-of-length error at the end, but that is benign and unrelated to your issue.

@Qiang-Lin Did you run it in Colab?

jinzishuai commented 3 years ago

> Running ao.mp4 will have index out of length error at the end but that is fine and it is irrelevant to your issue.

Hi @PeterL1n, are you seeing this index-out-of-length error too?

This is my own video source:

```
(bgm2) C:\ZeroBox\src\BackgroundMattingV2> python inference_video.py --model-type mattingrefine --model-backbone resnet101 --model-checkpoint PyTorch\pytorch_resnet101.pth --video-src ..\..\group15B_Short.avi --video-bgr ..\..\background_group15B.png --output-dir output1 --output-type com fgr pha err ref --model-refine-mode full
Directory output1 already exists. Override? [Y/N]: Y
 25%|█████████████████████████████▊                                                                                          | 557/2241 [02:21<07:07,  3.94it/s]
Traceback (most recent call last):
  File "inference_video.py", line 178, in <module>
    for src, bgr in tqdm(DataLoader(dataset, batch_size=1, pin_memory=True)):
  File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\tqdm\std.py", line 1171, in __iter__
    for obj in iterable:
  File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
    data = self._next_data()
  File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\ZeroBox\src\BackgroundMattingV2\dataset\zip.py", line 17, in __getitem__
    x = tuple(d[idx % len(d)] for d in self.datasets)
  File "C:\ZeroBox\src\BackgroundMattingV2\dataset\zip.py", line 17, in <genexpr>
    x = tuple(d[idx % len(d)] for d in self.datasets)
  File "C:\ZeroBox\src\BackgroundMattingV2\dataset\video.py", line 27, in __getitem__
    raise IndexError(f'Idx: {idx} out of length: {len(self)}')
IndexError: Idx: 557 out of length: 2241
```

I can confirm using ffmpeg that the source video contains only 557 frames, so I am not sure how that number 2241 is calculated (it is almost exactly 4×557). Is that a problem with my source video?

Thanks

PeterL1n commented 3 years ago

@jinzishuai Yeah, it is some OpenCV problem.