Volcomix / virtual-background

Demo on adding virtual background to a live video stream in the browser
https://volcomix.github.io/virtual-background
Apache License 2.0

question about light wrapping #24

Open carter54 opened 3 years ago

carter54 commented 3 years ago

Hi @Volcomix, thx for this nice work.

I'm interested in the light wrapping function of the post-processing pipeline and am trying to rewrite the following function in Python.

https://github.com/Volcomix/virtual-background/blob/8530c56ce419618a3679ca379e76bfd1518a35f5/src/pipelines/webgl2/backgroundImageStage.ts#L68-L78

however, the output is different from the output of your original code...

I've pasted my Python implementation here:

import cv2
import numpy as np

def screen(a, b):
    # Screen blend mode
    return 1.0 - (1.0 - a) * (1.0 - b)

def linear_dodge(a, b):
    # Linear dodge (additive) blend mode
    return a + b

def clamp(x, lowerlimit, upperlimit):
    if x < lowerlimit:
        x = lowerlimit
    if x > upperlimit:
        x = upperlimit
    return x

def smooth_step(edge0, edge1, x):
    # Scale, bias and saturate x to 0..1 range
    x = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    # Evaluate polynomial
    return x * x * (3 - 2 * x)

def light_wrapping(segmentation_mask, image, background, cov_x=0.6, cov_y=0.8, wrap_cof=0.3, blend_mode='linear'):
    # Light wrap mask fades in where the person mask drops below cov_y
    light_wrap_mask = 1. - np.maximum(0, segmentation_mask - cov_y) / (1 - cov_y)
    light_wrap = wrap_cof * light_wrap_mask[:, :, np.newaxis] * background
    if blend_mode == 'linear':
        frame_color = linear_dodge(image, light_wrap)
    elif blend_mode == 'screen':
        frame_color = screen(image, light_wrap)

    # Apply smoothstep to the segmentation mask element-wise
    smooth = lambda i: smooth_step(cov_x, cov_y, i)
    vectorized_smooth = np.vectorize(smooth)
    person_mask = vectorized_smooth(segmentation_mask)
    return person_mask, frame_color

def combine_foreground_background(person_mask, image, background):
    person_mask = person_mask[:, :, np.newaxis]
    output_image = image * person_mask + background * (1.0 - person_mask)
    output_image = output_image.astype(np.uint8)
    return output_image

def main():
    image_src = "path to input image"
    image = cv2.imread(image_src)   # uint8 BGR, (720, 1280, 3)
    background_src = "path to background image"
    background = cv2.imread(background_src)  # assuming image and background have the same size (720, 1280, 3)
    segmentation_mask = "segmentation result from model"  # (720, 1280), person probability in [0, 1]
    person_mask, frame_color = light_wrapping(segmentation_mask, image, background)
    output_frame = combine_foreground_background(person_mask, frame_color, background)
    return output_frame

I feel something might be wrong in smooth_step, but I cannot find more information except for this link.
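For comparison, here is a vectorized smooth_step written with np.clip (just a quick sketch; both versions follow the GLSL smoothstep formula, so they should give the same values):

import numpy as np

def smooth_step_np(edge0, edge1, x):
    # GLSL smoothstep: t = clamp((x - edge0) / (edge1 - edge0), 0, 1); result = t * t * (3 - 2 * t)
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

# Cross-check against the scalar version above on a few mask values
mask_values = np.linspace(0.0, 1.0, 11)
scalar = np.array([smooth_step(0.6, 0.8, m) for m in mask_values])
assert np.allclose(scalar, smooth_step_np(0.6, 0.8, mask_values))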

Could you please help me check if there is an obvious error in this Python code?

Thx in advance~

Volcomix commented 3 years ago

Hi @carter54, I don't see anything obvious by reading the code. smooth_step also looks fine, as it seems consistent with the OpenGL documentation.

It might help if you could post the resulting output and maybe some intermediate outputs like the masks, the output before smooth_step and a few others.
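Something like this (just a sketch, assuming the variable names from your snippet and masks in the [0, 1] range) could dump the intermediates as images:

import cv2
import numpy as np

def dump_debug_images(segmentation_mask, light_wrap_mask, person_mask, frame_color, prefix="debug"):
    # Write 8-bit PNGs of the intermediate results so they can be inspected and compared
    cv2.imwrite(f"{prefix}_segmentation_mask.png", (segmentation_mask * 255).astype(np.uint8))
    cv2.imwrite(f"{prefix}_light_wrap_mask.png", (light_wrap_mask * 255).astype(np.uint8))
    cv2.imwrite(f"{prefix}_person_mask.png", (person_mask * 255).astype(np.uint8))
    cv2.imwrite(f"{prefix}_frame_color.png", np.clip(frame_color, 0, 255).astype(np.uint8))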

AndreaBrg commented 2 years ago

Hi @carter54, did you ever manage to implement the light wrapping in Python?