PeterL1n / RobustVideoMatting

Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
https://peterl1n.github.io/RobustVideoMatting/
GNU General Public License v3.0

fast guided filter question #101

Closed carter54 closed 3 years ago

carter54 commented 3 years ago

Hi Peter, thx for such a nice project.

I have a question about the fast guided filter implemented in the model.

I saw you modified the boxfilter from the original:

original one: https://github.com/wuhuikai/DeepGuidedFilter/blob/0caaf2d78e2333ccbd7fcb01dafa685cad98f1b1/GuidedFilteringLayer/GuidedFilter_PyTorch/guided_filter_pytorch/box_filter.py#L26

yours: https://github.com/PeterL1n/RobustVideoMatting/blob/48effc91576a9e0e7a8519f3da687c0d3522045f/model/fast_guided_filter.py#L62

I did a simple test by randomly initializing three input tensors:

import torch

# FastGuidedFilterOri: the FastGuidedFilter class from DeepGuidedFilter's
# guided_filter_pytorch package, renamed here to avoid a name clash.
# FastGuidedFilter: the class from RobustVideoMatting/model/fast_guided_filter.py.

hr_x = torch.rand([100, 100])
hr_x = torch.unsqueeze(hr_x, 0)
hr_x = torch.unsqueeze(hr_x, 0)  # high-res guide, shape [1, 1, 100, 100]

lr_x = torch.rand([20, 20])
lr_x = torch.unsqueeze(lr_x, 0)
lr_x = torch.unsqueeze(lr_x, 0)  # low-res guide, shape [1, 1, 20, 20]

lr_y = torch.rand([20, 20])
lr_y = torch.unsqueeze(lr_y, 0)
lr_y = torch.unsqueeze(lr_y, 0)  # low-res target, shape [1, 1, 20, 20]

r = 2

layer1 = FastGuidedFilterOri(r, eps=1e-8)  # the original one
result1 = layer1(lr_x, lr_y, hr_x)
print(result1[0, 0, ...].numpy())

layer2 = FastGuidedFilter(r, eps=1e-8)  # yours
result2 = layer2(lr_x, lr_y, hr_x)
print(result2[0, 0, ...].numpy())

But the results are totally different. Did I do something wrong here?

PeterL1n commented 3 years ago

The two box filters can differ near the edges of the image, but if you compare the center of the image the results should be the same.
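
For a rough check along these lines (a sketch reusing result1, result2, r, hr_x, and lr_x from the snippet above, not code from either repository), crop a border band from both high-resolution outputs and compare only the interiors; the low-res box filter of radius r affects roughly r pixels at each border, which maps to about r times the upsampling factor once the coefficients are interpolated to high resolution.

scale = hr_x.shape[-1] // lr_x.shape[-1]  # 100 // 20 = 5
margin = r * scale                        # border band affected by edge handling
inner1 = result1[..., margin:-margin, margin:-margin]
inner2 = result2[..., margin:-margin, margin:-margin]
print(torch.allclose(inner1, inner2, atol=1e-4))  # expected to be True away from the edges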

Another way to implement a box filter is to use AvgPool2d with stride 1, which effectively blurs the image. AvgPool2d can also be set with count_include_pad=False to get an accurate blur at the edges.
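
A minimal sketch of that alternative (an illustration assuming a radius r, not code taken from either repository): a mean box filter built from AvgPool2d with stride 1, padding r, and count_include_pad=False, so border windows are averaged only over valid pixels. Note that it returns the mean directly, whereas the cumsum-based box filter returns window sums that are normalized afterwards.

import torch
from torch import nn

class BoxFilter(nn.Module):
    # Mean box filter over a (2r+1) x (2r+1) window with stride 1, so the
    # output keeps the input's spatial size. count_include_pad=False divides
    # each border window by the number of valid pixels instead of the full
    # window area, which is what gives the accurate blur at the edges.
    def __init__(self, r: int):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=2 * r + 1, stride=1,
                                 padding=r, count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x)

# Usage: a [N, C, H, W] tensor in, a same-sized blurred tensor out.
blurred = BoxFilter(r=2)(torch.rand(1, 1, 20, 20))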

carter54 commented 3 years ago

@PeterL1n I see, thx for the reply~ I think AvgPool2d should be the most efficient way to implement the box filter.