kice / vs_mxnet

Use MXNet to accelerate image processing in VapourSynth.
Mozilla Public License 2.0

[Feature Request] Adding support for only filtering a masked area #1

Open kriNon opened 5 years ago

kriNon commented 5 years ago

Hey, I'm working on a script that uses waifu2x in VapourSynth, and I'm trying to speed it up. I'm using waifu2x as an anti-aliasing filter, so I run it in YUV mode, in denoising mode, on the luma plane only. It's working fairly well, but since it's an AA filter I only need to run waifu2x on the edges of the clip, so it would be great if it were possible to give vs_mxnet an edge mask and have it process only those pixels.
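A minimal sketch of that luma-only setup (the source clip and the `waifu2x_denoise` helper are placeholders; the actual vs_mxnet invocation isn't shown in this thread):

```python
import vapoursynth as vs
core = vs.core

# Placeholder source; the real script would load the actual video here.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080, length=100)

def waifu2x_denoise(gray: vs.VideoNode) -> vs.VideoNode:
    # Hypothetical stand-in for the waifu2x noise-reduction call made through
    # vs_mxnet; the exact call is not part of this thread.
    return gray

# Run the filter on the luma plane only, then re-attach the untouched chroma.
y = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
y_aa = waifu2x_denoise(y)
out = core.std.ShufflePlanes([y_aa, clip], planes=[0, 1, 2], colorfamily=vs.YUV)
out.set_output()
```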

This could also be used in other ways, too: for example, if someone created a function that generates a mask of noisy areas, it would be possible to denoise only those areas.

Let me know what your thoughts are. If you're not interested at all then feel free to close this issue.

Thanks

kice commented 5 years ago

Even if you only want the pixels on the edges, you still have to feed waifu2x the whole image. waifu2x is based on a CNN, which needs every pixel of the image to compute the final result, so I don't think adding a mask would gain any speed improvement. If you're interested, you might google how CNNs work.

For partial image processing, I would suggest processing the whole image and then doing the mask merge yourself, because of how CNNs work; unless you use a single rectangular mask, in which case you can crop the image to cut down the computational cost. But few people use the latter method.
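A minimal sketch of the whole-frame-then-mask-merge approach described above, using standard VapourSynth filters (`std.Sobel`, `std.Maximum`, `std.MaskedMerge`); the source clip and the `waifu2x_denoise` helper are placeholders for the real calls:

```python
import vapoursynth as vs
core = vs.core

# Placeholder source and a stand-in for the full-frame waifu2x pass; both are
# assumptions, since the exact calls are not given in this thread.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080, length=100)

def waifu2x_denoise(gray: vs.VideoNode) -> vs.VideoNode:
    return gray  # replace with the real vs_mxnet / waifu2x call

y = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
y_filtered = waifu2x_denoise(y)          # the whole plane is processed

# Build an edge mask from the unprocessed luma and grow it slightly.
mask = core.std.Sobel(y)
mask = core.std.Maximum(mask)

# Keep the filtered result only where the mask is set; the rest stays original.
y_merged = core.std.MaskedMerge(y, y_filtered, mask)

out = core.std.ShufflePlanes([y_merged, clip], planes=[0, 1, 2], colorfamily=vs.YUV)
out.set_output()
```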

If you have other suggestions or questions, please let me know. If this answers your question, feel free to close the issue.

kriNon commented 5 years ago

Hey thanks for the quick response!

Maybe I do misunderstand how waifu2x works. I would imagine that, at a really basic level, waifu2x is just a more complicated convolution calculation, and since a convolution is computed per pixel, simply not calculating the values for certain pixels should make it significantly faster.

I don't believe that feeding waifu2x the whole image is the slow part of the operation. I believe the convolution calculations are the slow part, so it should be faster if only the masked part of the image is computed.
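As a toy illustration of that reasoning (a single hand-written 3x3 filter, not the actual waifu2x network), the cost of one layer is proportional to how many output positions are actually computed:

```python
import numpy as np

def filter3x3_at(image: np.ndarray, kernel: np.ndarray, ys, xs) -> np.ndarray:
    """Apply a 3x3 filter only at the (y, x) positions given."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty(len(ys), dtype=np.float32)
    for i, (y, x) in enumerate(zip(ys, xs)):
        out[i] = float(np.sum(padded[y:y + 3, x:x + 3] * kernel))
    return out

image = np.random.rand(1080, 1920).astype(np.float32)
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)

# If an edge mask selects ~5% of the pixels, only ~5% of the positions are computed.
mask = np.random.rand(1080, 1920) < 0.05
ys, xs = np.nonzero(mask)
edge_values = filter3x3_at(image, kernel, ys, xs)
```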