Closed: ecsplendid closed this issue 3 years ago
Here is the video I made to reproduce the problem
!gdown https://drive.google.com/uc?id=1q2MzIcsXIknxzKJSms9VxU2icVz3YV8R -O /content/tim.mp4 -q
I can reproduce it on your Google Colab too, so it's not my machine/environment.
Our model is called Background Matting, which requires you to supply an additional background capture without the subject. You didn't seem to provide a correctly pre-captured background through the --video-bgr argument. You are still using the demo bgr.png which is definitely not correct.
Oh! My bad, sorry, I feel dumb now -- I should have read your paper before trying this. I think I read BGR as blue/green/red rather than "background" lol
Do you think it would be possible to use your refiner idea on top of a standard salient object segmentation model and skip the need to have the background photo? Does it confer a huge improvement?
Edit -- I've got it working and the results are really nice!
Machine utilisation:
You can definitely adapt the refiner idea to segmentation networks. This requires you to experiment and train your own network. It will definitely allow processing higher-resolution images with much lower resource cost than passing everything through a standard conv-net, but it may not be able to recover all the hair details without a background.
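The core trick behind that efficiency is that only a small set of error-prone patches is re-run at full resolution. A toy numpy sketch of the patch-selection step (the function name, patch size, and error map here are illustrative, not the repo's actual code):

```python
import numpy as np

def select_refine_patches(err_map, k, patch=8):
    """Pick the k patch locations with the highest predicted error.

    err_map: (H//patch, W//patch) array of mean error per patch cell,
    as produced by a coarse low-resolution pass.
    Returns top-left (y, x) pixel corners of patches to refine."""
    flat = err_map.ravel()
    idx = np.argsort(flat)[-k:]  # indices of the k worst cells
    ys, xs = np.unravel_index(idx, err_map.shape)
    return [(int(y) * patch, int(x) * patch) for y, x in zip(ys, xs)]

# Toy example: a 32x32 alpha whose coarse pass flagged two bad cells.
err = np.zeros((4, 4))
err[1, 2] = 0.9  # e.g. a hair boundary the coarse net got wrong
err[3, 0] = 0.5
coords = select_refine_patches(err, k=2)
# Each coordinate is an 8x8 crop you would pass through the
# refinement head at full resolution, then paste back.
```

The rest of the image keeps the cheap upsampled coarse result, which is why the approach scales to high resolutions.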
Hi @PeterL1n, I have supplied an additional background capture without the subject, taken with my mobile phone, but it still does not work. Is there anything else I need to modify or supply?
Thanks a lot.
Here is my inference_video.sh:

python3 inference_video.py \
    --model-type mattingbase \
    --model-backbone resnet50 \
    --model-backbone-scale 0.25 \
    --model-refine-mode sampling \
    --model-refine-sample-pixels 80000 \
    --model-checkpoint ./PyTorch/pytorch_resnet50.pth \
    --video-src "./1615342262545425.mp4" \
    --video-bgr "./WechatIMG125.jpeg" \
    --video-resize 1920 1080 \
    --output-dir "./test" \
    --output-type com fgr pha err
This is my video-bgr.
Change model-type to mattingrefine.
You'd better provide the video frame, background image, and model output so I can debug.
I think the background needs to have the same size/scale as the input video frame size
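A quick way to sanity-check that is to compare the two sizes before running inference. A minimal sketch (this helper is hypothetical, not part of the repo; sizes are (width, height) as you would read them from the video and the background image):

```python
def check_bgr_matches(frame_size, bgr_size):
    """Return (ok, hint): ok is True if the background image can be
    used directly; otherwise hint describes what to fix."""
    if frame_size == bgr_size:
        return True, None
    fw, fh = frame_size
    bw, bh = bgr_size
    # Same aspect ratio: a plain resize of the background fixes it.
    if fw * bh == fh * bw:
        return False, f"resize background from {bw}x{bh} to {fw}x{fh}"
    # Different aspect ratio: e.g. a portrait phone photo against a
    # landscape video; resizing would distort the alignment.
    return False, "aspect ratios differ; recapture or crop the background"

ok, hint = check_bgr_matches((1920, 1080), (1080, 1920))
# A portrait phone capture against a 1080p landscape video fails here.
```

Since the method aligns the background pixel-for-pixel with each frame, a mismatched or distorted background gives exactly the kind of broken alpha shown above.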
When I use your src.mp4 i.e.
It works great
However, when I use one of my own videos (h264, 1080p) it doesn't work at all.
This is the alpha:
From this input:
From running this command (note that I also get an error message, but a video is still produced):