Closed TatianaSnauwaert closed 2 years ago
It would be easier to diagnose this issue with input images.
But if you want to retrain the model anyway, it would probably work even without a trimap if your test images are similar. See e.g. https://github.com/ZHKKKe/MODNet where the authors do something similar.
@99991 Thanks for your reply! Let me attach my input image as well.
I tried the repo you suggested, but it gives similar issues with the hair...
You are right, MODNet seems to be surprisingly bad with green screen backgrounds.
I tested a few matting methods from PyMatting (because I do not have a GPU right now). The result was less green with large-kernel matting, but the outline is slightly less sharp. It might also help to use PNG instead of a lossy image format like JPEG to avoid compression artifacts.
But training a neural network specifically for green screen backgrounds should likely produce better results in less time.
from pymatting import *
import numpy as np
import scipy.ndimage
import urllib.request

urllib.request.urlretrieve("https://user-images.githubusercontent.com/63235607/143055624-b7a1033e-4021-48f2-8f48-8d975b1d70ca.jpg", "image.jpg")

# Better: use a PNG image to avoid JPEG compression artifacts,
# or feed in the original image directly.
image = load_image("image.jpg", "RGB")

# Some hacky code to generate a trimap for a green background:
# a pixel is labeled background if it is mostly green.
r, g, b = image.transpose(2, 0, 1)
is_bg = (r < 0.4) & (g > 0.3) & (b < 0.3)
is_fg = np.logical_not(is_bg)

# Erode both masks with a disk-shaped structuring element so the
# region around the boundary becomes "unknown" in the trimap.
x = np.linspace(-1, 1, 21)
x, y = np.meshgrid(x, x)
structure = x * x + y * y < 1.0
is_fg = scipy.ndimage.binary_erosion(is_fg, structure=structure, border_value=1)
is_bg = scipy.ndimage.binary_erosion(is_bg, structure=structure, border_value=1)

# Foreground = 1, background = 0, unknown = 0.5
trimap = 0.5 + 0.5 * is_fg - 0.5 * is_bg

# Save the trimap for inspection (optional)
save_image("trimap.png", trimap)

print("Computing alpha matte. This might take a while.")
alpha = estimate_alpha_lkm(image, trimap, laplacian_kwargs={"radius": 30})

# Estimate foreground colors and compose the final cutout
foreground = estimate_foreground_ml(image, alpha)
cutout = stack_images(foreground, alpha)

save_image("alpha.png", alpha)
save_image("cutout.png", cutout)
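If some green spill still remains in the hair after matting, one simple post-processing trick is to clamp the green channel so it never exceeds the larger of the red and blue channels. This is a minimal sketch of a generic despill step (the function name and approach are mine, not part of the script above or of FBA matting):

```python
import numpy as np

def despill_green(foreground):
    """Clamp the green channel to max(red, blue) per pixel.

    This suppresses green spill (e.g. in hair strands) while leaving
    pixels that are not greenish to begin with unchanged.
    """
    out = foreground.copy()
    r = out[..., 0]
    b = out[..., 2]
    out[..., 1] = np.minimum(out[..., 1], np.maximum(r, b))
    return out

# Example: a green-tinted hair pixel loses its excess green,
# while a neutral pixel is left as-is.
pixels = np.array([[[0.3, 0.8, 0.2],    # green spill
                    [0.5, 0.4, 0.3]]])  # no spill
clean = despill_green(pixels)
```

The clamp is crude (it desaturates legitimately green foreground objects), so it is best applied only where the alpha matte is fractional.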
@99991 Thank you so much for your efforts! I tested your code and it works pretty well for me! However, sometimes I get this issue:
I'm thinking it has to do with how the trimap for the green background is generated... Do you have any advice for that? I'll attach the original input as well:
Thanks in advance for all your help!
Thanks @99991 for the good answers on this.
Hi @TatianaSnauwaert, I would add that you can use FBA matting for this. The way Thomas has defined the trimap will work much better for green screens than using DeepLabv3.
This is the crucial piece of code.
is_bg = (r < 0.4) & (g > 0.3) & (b < 0.3)
However, the parameters will need tuning for each green screen, and the thresholds can be sensitive to noise and to different lighting conditions. The approach will also fail if the person is wearing a green top.
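One way to make the thresholding somewhat less sensitive to lighting is to score each pixel by how much its green channel exceeds the larger of red and blue, rather than using fixed per-channel cutoffs. A minimal sketch (the function name and threshold values are illustrative assumptions, not from the code above):

```python
import numpy as np

def green_screen_trimap(image, bg_threshold=0.15, fg_threshold=0.0):
    """Build a trimap from a 'greenness' score.

    greenness = green minus max(red, blue), so the score is high only
    for pixels that are distinctly green, which tolerates brightness
    variation better than hard RGB cutoffs.
    """
    r, g, b = image.transpose(2, 0, 1)
    greenness = g - np.maximum(r, b)
    trimap = np.full(greenness.shape, 0.5)  # default: unknown
    trimap[greenness > bg_threshold] = 0.0  # clearly green -> background
    trimap[greenness < fg_threshold] = 1.0  # no green excess -> foreground
    return trimap

# Toy image: left half green screen, right half a reddish subject.
image = np.zeros((2, 4, 3))
image[:, :2] = [0.1, 0.8, 0.1]
image[:, 2:] = [0.8, 0.3, 0.2]
trimap = green_screen_trimap(image)
```

Eroding the confident regions, as in the script above, would still be needed to widen the unknown band around the hair.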
Sadly, there is no easy plug-and-play solution right now.
However, I think that if you fine-tuned any matting model on green-screen backgrounds, you would get a good result.
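For such fine-tuning, training images can be synthesized by alpha-compositing existing foreground/alpha pairs over flat green backgrounds. A minimal sketch of the compositing step (the function name and green color are my assumptions, not from any particular training pipeline):

```python
import numpy as np

def composite_on_green(foreground, alpha, green=(0.1, 0.7, 0.2)):
    """Alpha-composite a foreground over a flat green background.

    foreground: (H, W, 3) float image, alpha: (H, W) float matte.
    Returns a synthetic green-screen image usable as a training input,
    with (foreground, alpha) as the ground-truth targets.
    """
    background = np.broadcast_to(
        np.asarray(green, dtype=foreground.dtype), foreground.shape
    )
    a = alpha[..., None]
    return a * foreground + (1.0 - a) * background

# Fully opaque pixels keep the foreground color, fully transparent
# pixels become pure green, and fractional alpha blends the two.
fg = np.full((2, 2, 3), 0.5)
alpha = np.array([[1.0, 0.0], [0.5, 0.25]])
img = composite_on_green(fg, alpha)
```

Varying the green shade and adding slight noise per sample would better approximate real, unevenly lit green screens.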
Hi @MarcoForte! Thank you for your suggestions! I will test FBA matting with these trimaps as well. Yes, I will probably need to retrain the model with green-screen backgrounds to get more stable results. @99991 @MarcoForte Thank you both for your help!
Hello!
I am trying to apply the FBA Matting method to remove the green screen from images (portraits of people shot against a green screen). Overall it works well, but it leaves some green pixels mixed in with the hair (I attach an example here).
I can see 2 possible reasons for this issue:
Thank you in advance for any help! I would truly appreciate it!