PeterL1n / BackgroundMattingV2

Real-Time High-Resolution Background Matting

Refiner seems to be a double-edged sword #117

Closed jazzseow closed 3 years ago

jazzseow commented 3 years ago

I have trained a model with my own dataset. I noticed that the refiner does a very good job of patching missing areas and removing noise, but the edges become rugged.

Even when I test with the provided model and compare only the edges, there seems to be little improvement, and sometimes I feel that mattingbase performs better than mattingrefine.

Do you experience this too?

PeterL1n commented 3 years ago

> I feel that mattingbase performs better than mattingrefine.

MattingBase is limited to low-resolution. MattingRefine can do high resolution. I don't think you can directly compare the two. If your video is low-res, then just use MattingBase. If your video is high-res, then applying MattingBase directly is too computationally expensive.
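Roughly, the choice looks like this in code. A minimal sketch, assuming the MattingBase/MattingRefine classes exported from this repo's model module; the backbone, scale, and frame sizes are only illustrative, so double-check the constructor arguments against your checkout:

```python
import torch
from model import MattingBase, MattingRefine  # repo's model package

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Low-resolution footage: the base network alone is enough.
base = MattingBase(backbone='resnet50').to(device).eval()

# High-resolution footage: the refiner runs the base pass on a downsampled
# copy, then refines only a budget of patches at full resolution.
refine = MattingRefine(
    backbone='resnet50',
    backbone_scale=0.25,          # downsampling factor for the coarse pass
    refine_mode='sampling',
    refine_sample_pixels=80_000,  # patch budget for the refinement pass
).to(device).eval()

src = torch.rand(1, 3, 2160, 3840, device=device)  # source frame (illustrative size)
bgr = torch.rand(1, 3, 2160, 3840, device=device)  # captured clean background

with torch.no_grad():
    pha, fgr, *_ = refine(src, bgr)  # alpha matte and foreground
```

This is why the refiner stays tractable at high resolution: the coarse pass runs on a downsampled frame and the refinement cost scales with the patch budget rather than with the full frame size.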

jazzseow commented 3 years ago

Thank you for explaining; it seems I misunderstood the purpose of the refiner.

YuLiHN commented 3 years ago

Hello jazzseow, I have the same question as you. When I trained the refine network for about 20 epochs, the other patches did perform better, but the edges of the person became rugged. This gets worse as training goes on. I think my dataset is quite high resolution (3840x5120). Here is an example:

[example image]

How did you solve this?

PeterL1n commented 3 years ago

@YuLiHN

Increase the refiner sample pixels! The artifact in your image happens because the sample-pixels value is set too low, so many pixels that are not at the immediate edge don't get refined.
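Concretely, that means raising refine_sample_pixels when you build the model. A minimal sketch, assuming the MattingRefine constructor from this repo's model module; 320_000 is just an illustrative value to tune against your resolution and GPU memory. If you use the provided inference scripts instead, the same knob should be exposed as a command-line flag (likely --model-refine-sample-pixels; check the script's argparse definitions):

```python
from model import MattingRefine  # repo's model package

# Same setup as before, but with a larger refinement budget so that
# uncertain pixels further from the silhouette also get refined.
model = MattingRefine(
    backbone='resnet50',
    backbone_scale=0.25,
    refine_mode='sampling',
    refine_sample_pixels=320_000,  # default is 80_000; value here is illustrative
    refine_threshold=0.1,
)
```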

YuLiHN commented 3 years ago

> @YuLiHN
>
> Increase the refiner sample pixels! The artifact in your image happens because the sample-pixels value is set too low, so many pixels that are not at the immediate edge don't get refined.

Thanks for your advice! It did work!