PeterL1n / BackgroundMattingV2

Real-Time High-Resolution Background Matting
MIT License
6.85k stars · 952 forks

Not working #15

Closed · cioccolata12345 closed this issue 3 years ago

cioccolata12345 commented 3 years ago

So I followed this tutorial on YouTube, https://www.youtube.com/watch?v=HlOUKj6WP-s&list=PLmo1GBItOimXfKR5t4D3f0doSflEgUo9j&index=3&t=474s, installed everything I needed, activated everything, and made sure the picture and video were the same size and named properly, but I cannot get the program to green-screen me out. I have an NVIDIA graphics card. I used a sample image and video from this website and it worked, but mine won't. It green-screens random sections of the background but not everything. It's not a complicated scene, and the camera is on a tripod: just me walking away for a few seconds and turning around. It is a 4K video. I cannot upload the original as it is too big, so I am converting it to a smaller size and uploading that for you to look at. Help me please.

https://user-images.githubusercontent.com/76640989/103165057-31404f80-47d0-11eb-9892-52d7993febda.mp4

PeterL1n commented 3 years ago

Where is the background image without the subject?

Our method is background matting: it requires an additional capture of the background without the subject. So you could start recording, then enter the scene, and later use the first frame as the background input.

When you run the inference_video.py script, the --video-bgr argument asks for this pre-captured background image, not the target background you want to composite onto.
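For example, a minimal sketch of grabbing the first frame as the background plate (my own illustration, assuming OpenCV is installed; the file names are hypothetical):

```python
# A minimal sketch (not part of this repo): save the first frame of the clip,
# recorded before the subject enters, as the background plate.
# File names are hypothetical.
import cv2

cap = cv2.VideoCapture("input/src.mp4")
ok, frame = cap.read()  # frame 0: the empty scene
cap.release()

if not ok:
    raise RuntimeError("could not read the first frame")
cv2.imwrite("input/bg.png", frame)  # pass this path as --video-bgr
```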

If you did capture a background, can you send me the file too?

cioccolata12345 commented 3 years ago

I have it as well; I didn't send it because I was just sending an example of what I am having a problem with. I am sending it now.


[Attached background image: bg]

mowshon commented 3 years ago

The dimensions of the photo and video do not match. In the photo, part of the fan is cut off at the edge of the frame, but it is visible in the video.

cioccolata12345 commented 3 years ago

Like I said in my initial post, I had to resize the video because it was too large to attach. It wouldn't let me upload it because it was too big, so I used HandBrake to size it down, and that's why it doesn't match. But the original 4K video and 4K PNG do match.


PeterL1n commented 3 years ago

I see @mowshon's point.

I understand that you resized src.mp4, but look at the fan at the very left: bgr.png has part of it cropped out of the frame, while src.mp4 shows more of it. So there is a shift between src.mp4 and bgr.png, and that could be why the model failed. The src and bgr need to be precisely aligned.

You could try adding the --preprocess-alignment flag when running inference_video.py. It uses OpenCV to align the background to the video before passing both to the model. But for the best result, capture the background so it is already aligned.
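Conceptually, that preprocessing does something like the following OpenCV sketch (a simplified illustration only, not the repo's actual implementation; the paths are hypothetical):

```python
# Simplified illustration (not the repo's code): estimate an affine warp
# with ECC and warp the captured background onto the video's first frame.
import cv2
import numpy as np

frame = cv2.imread("input/frame0.png")  # first video frame (hypothetical path)
bg = cv2.imread("input/bg.png")         # captured background plate

frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
bg_gray = cv2.cvtColor(bg, cv2.COLOR_BGR2GRAY)

# Start from the identity transform and refine it with ECC.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
_, warp = cv2.findTransformECC(frame_gray, bg_gray, warp,
                               cv2.MOTION_AFFINE, criteria)

# Warp the background so it lines up with the video frames.
h, w = frame.shape[:2]
aligned = cv2.warpAffine(bg, warp, (w, h),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
cv2.imwrite("input/bg_aligned.png", aligned)
```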

cioccolata12345 commented 3 years ago

Holy crap, I didn't even notice that. I appreciate your patience and advice. I will redo my video and image so they match and try again. I love the program and I hope I can get it to work. Thanks.

OK, so I saved the very first frame of the video as the background, redid the sequence, and it worked. It's not totally clean around the edges, but that's something I can clean up in post. Again, thanks for the quick responses and advice, and for the great program. Works awesome.


PeterL1n commented 3 years ago

No problem. Let us know if you get it fixed.

cioccolata12345 commented 3 years ago

It works now, thanks. Two questions:

  1. Does it save at the same quality as the original video? It says it's the same size, but the quality is not the same. I am guessing the program is not able to save at the original quality. If not, no worries; it still works great.
  2. Can the other output files it saves be brought into Blender to help with the green-screening?
PeterL1n commented 3 years ago

  1. Currently, the script uses OpenCV to save the output as mp4 files. It is just a demo to show that the method works; we didn't put much effort into offering codec or bitrate control. You can output a PNG image sequence instead by passing --output-format=image_sequences. PNG images also come with an alpha channel, so you don't need to do color keying again.
  2. Our neural model actually produces an alpha matte, but because videos don't support an alpha channel, we composite the result onto a green background, which you then have to color-key away again in other software, which is admittedly redundant. I am not familiar with Blender, but if you want, you can take the fgr and pha outputs and do the compositing yourself (see the sketch below), or switch to outputting an image sequence.
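For instance, a small compositing sketch (my own illustration, assuming the fgr and pha outputs were saved as PNG sequences; the paths and the new background file are hypothetical):

```python
# Recombine fgr (foreground color) and pha (alpha matte) over a new
# background -- the standard "over" composite. Not the repo's code;
# file paths are hypothetical.
import cv2
import numpy as np

fgr = cv2.imread("output/fgr/0000.png").astype(np.float32) / 255
pha = cv2.imread("output/pha/0000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
bg = cv2.imread("new_background.png").astype(np.float32) / 255

bg = cv2.resize(bg, (fgr.shape[1], fgr.shape[0]))  # match frame size
pha = pha[..., None]                               # broadcast alpha over BGR

com = fgr * pha + bg * (1 - pha)
cv2.imwrite("composite.png", (com * 255).astype(np.uint8))
```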
cioccolata12345 commented 3 years ago

So I am trying to save it as an image sequence. I have the script with your "output" command, though I am not sure if I put it in the right place:

python inference_video.py --model-type mattingrefine --model-backbone resnet50 --model-checkpoint ./model/PyTorch/pytorch_resnet50.pth --model-refine-sample-pixels 320_000 --video-src ./input/src.mp4 --video-bgr ./input/bg.png --output-dir ./output ----output-format=image_sequence .png

and it gives me the following error:

--model-backbone {resnet101,resnet50,mobilenetv2} [--model-backbone-scale MODEL_BACKBONE_SCALE] --model-checkpoint MODEL_CHECKPOINT [--model-refine-mode {full,sampling,thresholding}] [--model-refine-sample-pixels MODEL_REFINE_SAMPLE_PIXELS] [--model-refine-threshold MODEL_REFINE_THRESHOLD] [--model-refine-kernel-size MODEL_REFINE_KERNEL_SIZE] --video-src VIDEO_SRC --video-bgr VIDEO_BGR [--video-resize VIDEO_RESIZE VIDEO_RESIZE] [--preprocess-alignment] --output-dir OUTPUT_DIR --output-types {com,pha,fgr,err,ref} [{com,pha,fgr,err,ref} ...] [--output-format {video,image_sequences}]
inference_video.py: error: the following arguments are required: --output-types

Not sure if I am doing this wrong. Thanks for your help.

PeterL1n commented 3 years ago

A couple of errors. I think it should be:

python inference_video.py --model-type mattingrefine --model-backbone resnet50 --model-checkpoint ./model/PyTorch/pytorch_resnet50.pth --model-refine-sample-pixels 320000 --video-src ./input/src.mp4 --video-bgr ./input/bg.png --output-dir ./output --output-format image_sequences --output-types com

  1. It is --output-format, not ----output-format.
  2. The error was telling you that you were missing the --output-types argument.
  3. --output-format should be image_sequences, not image_sequence.
cioccolata12345 commented 3 years ago

That worked perfectly. Thanks. Are there going to be any upgrades in the future that add video output options, or is there a way to tweak the settings manually ourselves? If not, thank you for all your help and the awesome software. I will keep this page bookmarked for future updates, fingers crossed.

PeterL1n commented 3 years ago

It's unlikely we will add more video options. Our focus is more on the deep-learning research side and less on the engineering side of building a fully usable product. The neural model is public, so technically people can build all sorts of products and options on top of it, but we will not be providing that ourselves.

cioccolata12345 commented 3 years ago

Thanks!
